4. Examples
4.1. Red-Black Tree Height — Recurrence for \(B(h)\) (Problem Set 6, Task 1)
Let \(B(h)\) denote the minimum possible number of internal nodes in a red-black tree of black-height \(h\) (black-height is the number of black nodes from root to leaf; empty tree has \(\text{bh} = 0\), singleton has \(\text{bh} = 1\)).
- Derive a recurrence relation for \(B(h)\) and compute \(B(h)\) for \(h = 0, 1, 2, 3, 4, 5, 6, 7, 8\).
- Derive a lower bound of the form \(B(h) \ge c \cdot 2^{h/2}\).
- Give a big-\(O\) upper bound on the height of a red-black tree as a function of the number of keys \(n\).
- For \(n = 10^6\), estimate the maximal possible height.
Click to see the solution
Key Concept: Red nodes add to the node count without increasing the black-height, so the minimum number of internal nodes at a given black-height is achieved by using no red nodes at all — a perfect, all-black tree.
Part 1 — Recurrence:
A red-black tree of black-height \(h \ge 1\) has a black root whose two subtrees each have black-height \(h-1\). Since red nodes can only increase the node count, the minimum is achieved when both subtrees are themselves minimal (all-black) trees of black-height \(h-1\):
\[B(h) = 2B(h-1) + 1 \quad \text{with } B(0) = 0\]
(the empty tree has black-height 0).
Computed values:
| \(h\) | \(B(h)\) |
|---|---|
| 0 | 0 |
| 1 | 1 |
| 2 | 3 |
| 3 | 7 |
| 4 | 15 |
| 5 | 31 |
| 6 | 63 |
| 7 | 127 |
| 8 | 255 |
Note: \(B(h) = 2^h - 1\).
Part 2 — Lower bound:
From the closed form, for \(h \ge 1\):
\[B(h) = 2^h - 1 \ge 2^{h-1} = \frac{1}{2} \cdot 2^h \ge \frac{1}{2} \cdot 2^{h/2}\]
so the requested bound \(B(h) \ge c \cdot 2^{h/2}\) holds with \(c = \frac{1}{2}\). (At \(h = 0\) the bound cannot hold for any positive \(c\), since \(B(0) = 0\).)
Part 3 — Height bound:
A tree with \(n\) internal nodes has black-height \(\text{bh}\) satisfying \(n \ge B(\text{bh}) = 2^{\text{bh}} - 1\), so \(\text{bh} \le \log_2(n+1)\).
The total height \(h \le 2 \cdot \text{bh}\) (since at most half the nodes on any path are red), giving:
\[h \le 2\log_2(n+1) = O(\log n)\]
Part 4 — Estimate for \(n = 10^6\):
\[h \le 2\log_2(10^6 + 1) \approx 2 \times \log_2(10^6) = 2 \times 19.93 \approx \mathbf{40}\]
Answer: 1. \(B(h) = 2B(h-1) + 1\) with \(B(0) = 0\); closed form \(B(h) = 2^h - 1\). 2. For \(h \ge 1\), \(B(h) = 2^h - 1 \ge \frac{1}{2} \cdot 2^h \ge \frac{1}{2} \cdot 2^{h/2}\), so \(c = \frac{1}{2}\) works. 3. \(h \le 2\log_2(n+1) = O(\log n)\). 4. For \(n = 10^6\), maximum height \(\approx 40\).
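The recurrence, the closed form, and the height estimate can be cross-checked with a few lines of Python (a quick sanity check, not part of the required solution):

```python
import math

def B(h: int) -> int:
    """Minimum internal nodes of a red-black tree of black-height h."""
    return 0 if h == 0 else 2 * B(h - 1) + 1

# The recurrence matches the closed form 2^h - 1 for h = 0..8.
for h in range(9):
    assert B(h) == 2**h - 1

# Height bound for n = 10^6: h <= 2*log2(n + 1).
n = 10**6
print(2 * math.log2(n + 1))  # ≈ 39.86, i.e. about 40
```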
4.2. Build and Delete from a BST (Problem Set 6, Task 2)
Consider the BST built by inserting the following keys in that order:
\[7, 3, 5, 10, 4, 2, 6, 8, 9, 12, 1, 11\]
- Draw the resulting BST.
- Delete keys \(10, 9, 8, 7, 6\) in that order. For each deletion, state which case applies and draw the tree after each deletion. Specify whether you used the successor or predecessor when deleting a node with two children.
- List all possible keys that could be in the root after deleting \(10, 9, 8, 7, 6\) in order (considering all choices of successor or predecessor).
Click to see the solution
Part 1 — Build the BST:
Insert each key following BST rules (go left if smaller, right if larger):
- 7: root
- 3: \(3 < 7\) → left of 7
- 5: \(5 < 7\) → left; \(5 > 3\) → right of 3
- 10: \(10 > 7\) → right of 7
- 4: \(4 < 7\) → left; \(4 > 3\) → right; \(4 < 5\) → left of 5
- 2: \(2 < 7\) → left; \(2 < 3\) → left of 3
- 6: \(6 < 7\) → left; \(6 > 3\) → right; \(6 > 5\) → right of 5
- 8: \(8 > 7\) → right; \(8 < 10\) → left of 10
- 9: \(9 > 7\) → right; \(9 < 10\) → left; \(9 > 8\) → right of 8
- 12: \(12 > 7\) → right; \(12 > 10\) → right of 10
- 1: … → left of 2
- 11: \(11 > 7\) → right; \(11 > 10\) → right; \(11 < 12\) → left of 12
Resulting BST:
```
          7
        /   \
      3       10
     / \     /  \
    2   5   8    12
   /   / \   \   /
  1   4   6   9 11
```
Part 2 — Deletions:
Delete 10 (two children: 8 and 12): Case — two children. Use successor (smallest in right subtree = 11). Replace 10 with 11; delete 11 from its original position (leaf).
```
          7
        /   \
      3       11
     / \     /  \
    2   5   8    12
   /   / \   \
  1   4   6   9
```
Delete 9 (leaf): Case — leaf. Simply remove.
```
          7
        /   \
      3       11
     / \     /  \
    2   5   8    12
   /   / \
  1   4   6
```
Delete 8 (leaf): Case — leaf. Simply remove.
```
          7
        /   \
      3       11
     / \        \
    2   5        12
   /   / \
  1   4   6
```
Delete 7 (two children: 3 and 11): Case — two children. Use successor (smallest in right subtree = 11). But 11 has a right child 12. Replace 7’s key with 11; remove 11, promoting 12.
```
         11
        /   \
      3      12
     / \
    2   5
   /   / \
  1   4   6
```
Delete 6 (leaf): Case — leaf. Simply remove.
```
         11
        /   \
      3      12
     / \
    2   5
   /   /
  1   4
```
Part 3 — All possible roots after the deletions:
First note that every key in \(\{10, 9, 8, 7, 6\}\) is itself deleted, so none of them can be the final root. Trace the choice points:

- Delete 10 (children 8 and 12): successor 11 or predecessor 9. If predecessor 9 is chosen, the key 9 moves into 10’s old position and must still be removed in the next step; deleting it (again a node with two children, 8 and 12) offers the choice of successor 11 or predecessor 8. In every variant, after 9 and 8 have been deleted, 7’s right subtree is a chain containing exactly the keys 11 and 12.
- Delete 7 (two children): its successor is 11 (the minimum of the right subtree), making 11 the root; its predecessor is 6, making 6 the root.
- Delete 6: if 11 became the root, 6 is still a leaf (right child of 5) and its removal changes nothing — the root stays 11. If 6 became the root, it now has two children, so deleting it offers one more choice: successor 11 (root becomes 11) or predecessor 5 (root becomes 5).

All possible keys that can end up as root: \(\{5, 11\}\).

Answer: 1. BST as drawn above. 2. Deletions use the cases as described; the tree evolves through the shown states. 3. The possible root values after all deletions are 5 and 11 (6 and 9 cannot remain, since both are themselves deleted).
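The successor-policy trace is easy to check mechanically. The sketch below (plain dictionaries; helper names are ours, not from the problem set) rebuilds the tree and replays the five deletions, always using the successor for two-children nodes:

```python
def insert(node, key):
    if node is None:
        return {"key": key, "left": None, "right": None}
    side = "left" if key < node["key"] else "right"
    node[side] = insert(node[side], key)
    return node

def delete(node, key):
    if node is None:
        return None
    if key < node["key"]:
        node["left"] = delete(node["left"], key)
    elif key > node["key"]:
        node["right"] = delete(node["right"], key)
    elif node["left"] is None:          # zero or one child: splice
        return node["right"]
    elif node["right"] is None:
        return node["left"]
    else:                               # two children: copy successor's key
        succ = node["right"]
        while succ["left"] is not None:
            succ = succ["left"]
        node["key"] = succ["key"]
        node["right"] = delete(node["right"], succ["key"])
    return node

root = None
for k in [7, 3, 5, 10, 4, 2, 6, 8, 9, 12, 1, 11]:
    root = insert(root, k)
for k in [10, 9, 8, 7, 6]:
    root = delete(root, k)
print(root["key"])  # 11, matching the successor-policy trace above
```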
4.3. \(d\)-ary Tree in an Array (Problem Set 6, Task 3)
Consider storing an ordered rooted \(d\)-ary tree in an array using level-order numbering (root at index 0).
- For a perfect \(d\)-ary tree of height \(h\), derive a closed-form formula for the total number of nodes \(N(d, h)\).
- Derive formulas for (a) the index of the parent of node at index \(i > 0\), and (b) the index of the \(k\)-th child (\(1 \le k \le d\)) of the node at index \(i\).
- For a quaternary tree (\(d = 4\)) in an array of length \(n = 70\), list all children of the node at index \(i = 17\) that exist in the array.
Click to see the solution
Part 1 — Total nodes in a perfect \(d\)-ary tree of height \(h\):
Level 0 (root) has 1 node. Level 1 has \(d\) nodes. Level \(k\) has \(d^k\) nodes. Total:
\[N(d, h) = \sum_{k=0}^{h} d^k = \frac{d^{h+1} - 1}{d - 1}\]
For \(d = 2\) (binary): \(N(2, h) = 2^{h+1} - 1\). For \(d = 4\), \(h = 2\): \(N = (4^3 - 1)/3 = 63/3 = 21\).
Part 2 — Index formulas:
(a) Parent of node at index \(i > 0\):
The children of node at index \(j\) occupy indices \(dj+1\) through \(dj+d\). Node \(i\) is a child of node \(j\) if \(dj+1 \le i \le dj+d\), i.e., \(j = \lfloor(i-1)/d\rfloor\).
\[\text{parent}_d(i) = \left\lfloor \frac{i-1}{d} \right\rfloor \quad (i > 0)\]
(b) \(k\)-th child of node at index \(i\):
Node at index \(i\) has children at indices \(di+1, di+2, \ldots, di+d\). The \(k\)-th child (\(1 \le k \le d\)) is at:
\[\text{child}_{d,k}(i) = d \cdot i + k\]
Verification for \(d = 2\): \(\text{child}_{2,1}(i) = 2i+1\) (left), \(\text{child}_{2,2}(i) = 2i+2\) (right). Matches the standard binary tree formulas. ✓
Part 3 — Children of node at \(i = 17\) for \(d = 4\), \(n = 70\):
\[\text{child}_{4,k}(17) = 4 \times 17 + k = 68 + k\]
So the children are at indices \(68+1=69\), \(68+2=70\), \(68+3=71\), \(68+4=72\).
The array has indices \(0\) to \(n-1 = 69\). So:
- Index 69: exists (valid, \(69 \le 69\))
- Index 70: does not exist (\(70 > 69\))
- Index 71: does not exist
- Index 72: does not exist
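The index formulas can be wrapped as small helpers and checked against this instance (function names are ours):

```python
def parent(i: int, d: int) -> int:
    """Index of the parent of node i (i > 0) in a level-order d-ary array."""
    return (i - 1) // d

def child(i: int, k: int, d: int) -> int:
    """Index of the k-th child (1 <= k <= d) of node i."""
    return d * i + k

n, d, i = 70, 4, 17
kids = [child(i, k, d) for k in range(1, d + 1)]
print(kids)                                  # [69, 70, 71, 72]
print([c for c in kids if c < n])            # only index 69 exists
assert all(parent(c, d) == i for c in kids)  # formulas are mutually consistent
```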
Answer: 1. \(N(d,h) = \dfrac{d^{h+1}-1}{d-1}\) 2. (a) \(\text{parent}_d(i) = \lfloor(i-1)/d\rfloor\); (b) \(\text{child}_{d,k}(i) = d \cdot i + k\) 3. Only child at index 69 exists in the array.
4.4. Build a BST by Insertion (Lecture 6, Task 1)
Insert the following keys into an initially empty binary search tree in the given order and draw the resulting tree:
\[50, 30, 70, 20, 40, 60, 80\]
Click to see the solution
Key Concept: To insert into a BST, compare the new key to the current node and go left if smaller, right if larger. Insert as a new leaf at the first empty spot.
Step-by-step insertions:
- Insert 50: Tree is empty; 50 becomes the root.
- Insert 30: \(30 < 50\) → go left. Left of 50 is empty → place 30 there.
- Insert 70: \(70 > 50\) → go right. Right of 50 is empty → place 70.
- Insert 20: \(20 < 50\) → left to 30; \(20 < 30\) → go left. Empty → place 20.
- Insert 40: \(40 < 50\) → left to 30; \(40 > 30\) → go right. Empty → place 40.
- Insert 60: \(60 > 50\) → right to 70; \(60 < 70\) → go left. Empty → place 60.
- Insert 80: \(80 > 50\) → right to 70; \(80 > 70\) → go right. Empty → place 80.
Resulting BST:

```
       50
      /  \
    30    70
   / \   /  \
  20 40 60  80
```
Answer: The root is 50. Its left subtree has root 30 (children 20, 40) and right subtree has root 70 (children 60, 80). The tree is perfectly balanced with height 2.
4.5. Build an AVL Tree by Insertion (Lecture 6, Task 2)
Build an AVL tree by inserting the following keys into an initially empty tree in the given order:
\[1, 2, 3, 8, 41, 12, 19, 31, 38\]
Show the tree after each rotation.
Click to see the solution
Key Concept: After each insertion, check the balance factors on the path from the inserted node to the root. Apply single or double rotations to restore \(|\text{bf}| \le 1\).
Convention: \(h(\text{empty}) = -1\), \(h(\text{single node}) = 0\).
Step 1 — Insert 1: Tree: just node 1. Balanced.
Step 2 — Insert 2: \(2 > 1\), goes right of 1. \(\text{bf}(1) = -1\). Still balanced.
Step 3 — Insert 3: \(3 > 1\) → right; \(3 > 2\) → right of 2. \(\text{bf}(1) = -2\) → RR case. Apply left rotation at 1:
```
Before:  1          After:    2
          \                  / \
           2                1   3
            \
             3
```
Rotations so far: 1.
Step 4 — Insert 8: \(8 > 2\) → right to 3; \(8 > 3\) → right of 3. \(\text{bf}(3) = -1\), \(\text{bf}(2) = -1\). Balanced.
Step 5 — Insert 41: \(41 > 2\) → right; \(41 > 3\) → right; \(41 > 8\) → right of 8. \(\text{bf}(3) = -2\) → RR case. Left rotation at 3:
```
Before:  2              After:    2
        / \                      / \
       1   3                    1   8
            \                      / \
             8                    3   41
              \
               41
```
Rotations so far: 2.
Step 6 — Insert 12: \(12 > 2\) → right to 8; \(12 > 8\) → right to 41; \(12 < 41\) → left of 41. Now \(\text{bf}(41) = 1\) and \(\text{bf}(8) = -1\), but \(\text{bf}(2) = h(1) - h(8) = 0 - 2 = -2\) → RR case at the root (right child 8 is right-heavy). Left rotation at 2:

```
Before:  2             After:     8
        / \                     /   \
       1   8                   2     41
          / \                 / \    /
         3   41              1   3  12
             /
            12
```

Rotations so far: 3.

Step 7 — Insert 19: \(19 > 8\) → right to 41; \(19 < 41\) → left to 12; \(19 > 12\) → right of 12. \(\text{bf}(41) = h(12) - h(\varnothing) = 1 - (-1) = 2\), and 41’s left child 12 is right-heavy → LR case at 41. First left rotation at 12, then right rotation at 41:

```
After LR at 41:      8
                   /   \
                  2     19
                 / \   /  \
                1   3 12   41
```

Rotations so far: 5.

Step 8 — Insert 31: \(31 > 8\), \(31 > 19\), \(31 < 41\) → left of 41. All balance factors remain in \(\{-1, 0, 1\}\) (\(\text{bf}(41) = 1\), \(\text{bf}(19) = -1\), \(\text{bf}(8) = -1\)). Balanced; no rotation.

Step 9 — Insert 38: \(38 > 8\), \(38 > 19\), \(38 < 41\) → left to 31; \(38 > 31\) → right of 31. \(\text{bf}(41) = h(31) - h(\varnothing) = 1 - (-1) = 2\), and 41’s left child 31 is right-heavy → LR case at 41. Left rotation at 31, then right rotation at 41:

```
Final tree:      8
               /   \
              2     19
             / \   /  \
            1   3 12   38
                      /  \
                     31   41
```

Total rotations performed: 7 (three single rotations and two double rotations, each double counted as two).

Answer: The insertions require 7 rotations in total. The final AVL tree has root 8 and height 3.
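The whole sequence can be replayed with a compact AVL implementation (a sketch of the standard insert-rebalance algorithm; class and helper names, and the rotation counter, are ours). It confirms the shape and height of the final tree:

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right, self.h = key, None, None, 0

rotations = 0

def height(n):                 # convention: h(empty) = -1, h(leaf) = 0
    return n.h if n else -1

def update(n):
    n.h = 1 + max(height(n.left), height(n.right))

def rotate_right(y):
    global rotations
    rotations += 1
    x = y.left
    y.left, x.right = x.right, y
    update(y); update(x)
    return x

def rotate_left(x):
    global rotations
    rotations += 1
    y = x.right
    x.right, y.left = y.left, x
    update(x); update(y)
    return y

def insert(n, key):
    if n is None:
        return Node(key)
    if key < n.key:
        n.left = insert(n.left, key)
    else:
        n.right = insert(n.right, key)
    update(n)
    bf = height(n.left) - height(n.right)
    if bf > 1:                                  # left-heavy
        if height(n.left.left) < height(n.left.right):
            n.left = rotate_left(n.left)        # LR: first rotation
        return rotate_right(n)                  # LL (or LR second rotation)
    if bf < -1:                                 # right-heavy
        if height(n.right.right) < height(n.right.left):
            n.right = rotate_right(n.right)     # RL: first rotation
        return rotate_left(n)                   # RR (or RL second rotation)
    return n

root = None
for k in [1, 2, 3, 8, 41, 12, 19, 31, 38]:
    root = insert(root, k)
print(root.key, root.h, rotations)  # root key, final height, rotation count
```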
4.6. Build a Red-Black Tree by Insertion (Lecture 6, Task 3)
Build a red-black tree by inserting the following keys into an initially empty tree in the given order:
\[1, 2, 3, 8, 41, 12, 19, 31, 38\]
Click to see the solution
Key Concept: Each new node is inserted as red. After insertion, fix any red–red violation by applying Case 1 (recolor), Case 2 (rotation to convert to Case 3), or Case 3 (rotation + recolor). Always re-blacken the root.
Color notation: B = black, R = red.
Insert 1: Node 1 becomes root → color black. Tree: 1(B).
Insert 2: \(2 > 1\), right child of 1. Color red. No red–red violation. Tree: 1(B) → right: 2(R).
Insert 3: \(3 > 1\) and \(3 > 2\), so 3 becomes the right child of 2(R). Now 2(R)–3(R) is a violation. Uncle of 3 is NIL (black). This is Case 3 (right-right): left rotation at 1, then recolor.
After rotation + recolor:

```
     2(B)
    /    \
 1(R)    3(R)
```
Insert 8: Right child of 3(R). Uncle of 8 is 1(R) → Case 1 (both uncle and parent are red): recolor 3 → B, 1 → B (uncle), and 2 → R; then re-blacken root 2 → B.
Tree:

```
     2(B)
    /    \
 1(B)    3(B)
            \
            8(R)
```
Insert 41: Right child of 8(R). Uncle (3’s left child) is NIL (black). Case 3 (right-right): left rotation at 3, then recolor (8 → B, 3 → R).

```
After:   2(B)
        /    \
     1(B)    8(B)
            /    \
         3(R)    41(R)
```
Insert 12: \(12 < 41\), left child of 41(R). Uncle of 12 is 3(R) → Case 1: recolor 41 → B, 3 → B, 8 → R, then re-blacken 2(B) (2 is root, stays black). Now 2(B)–8(R) is fine since 2 is black.
Tree:

```
     2(B)
    /    \
 1(B)    8(R)
        /    \
     3(B)    41(B)
             /
         12(R)
```
Insert 19: Right of 12(R). Uncle of 19 is NIL (black), and 19 is a right child of a left child → Case 2: left rotate at 12, converting to Case 3; then right rotate at 41 and recolor (19 → B, 41 → R).

```
Tree:    2(B)
        /    \
     1(B)    8(R)
            /    \
         3(B)    19(B)
                /     \
            12(R)    41(R)
```

Insert 31: Left of 41(R). Uncle of 31 is 12(R) → Case 1: recolor 41 → B, 12 → B, 19 → R. The violation moves up: 19(R) now has red parent 8(R); the uncle 1(B) is black and 19 is a right child of a right child → Case 3: left rotate at 2, recolor 8 → B, 2 → R.

```
Tree:       8(B)
           /    \
        2(R)    19(R)
       /    \   /    \
    1(B) 3(B) 12(B) 41(B)
                    /
                31(R)
```

Insert 38: Right of 31(R). Uncle of 38 is NIL (black), and 38 is a right child of a left child → Case 2: left rotate at 31; then Case 3: right rotate at 41 and recolor (38 → B, 41 → R).

Final red-black tree:

```
            8(B)
           /    \
        2(R)    19(R)
       /    \   /    \
    1(B) 3(B) 12(B) 38(B)
                    /    \
                31(R)   41(R)
```

Answer: The tree is built by applying recolorings and rotations after each insertion to maintain the five red-black invariants. The resulting tree has height 3 for 9 nodes, well within the \(2\log_2(10) \approx 6.6\) bound.
4.7. Delete from a Red-Black Tree (Lecture 6, Task 4)
Given the red-black tree below, delete the keys in order: \(8, 12, 19, 31, 38, 41\).
The initial tree has: root 38 (Black), left child 19 (Red), right child 41 (Black). Node 19 has left child 12 (Black) and right child 31 (Black). Node 12 has left child 8 (Red).
Click to see the solution
Key Concept: Deletion from a red-black tree follows BST deletion, then fixes any “double-black” violations using one of four cases. For brevity, we trace the key cases.
Initial tree:
```
        38(B)
       /     \
    19(R)    41(B)
   /     \
 12(B)   31(B)
  /
8(R)
```
Delete 8 (red leaf): Simply remove it. No violation occurs — removing a red node does not change any black-heights.
```
        38(B)
       /     \
    19(R)    41(B)
   /     \
 12(B)   31(B)
```
Delete 12 (black leaf): Removing a black node creates a “double-black” situation. Node 12’s sibling is 31(B) with no red children → Case 2: recolor 31 → Red, push double-black up to 19(R). Since 19 is red, re-color 19 → Black to absorb the double-black.
```
        38(B)
       /     \
    19(B)    41(B)
        \
       31(R)
```
Delete 19 (black, one child 31(R)): Replace 19 with its right child 31, color it black.
```
        38(B)
       /     \
    31(B)    41(B)
```
Delete 31 (black leaf): Double-black at 31’s position. Sibling is 41(B) with no red children, parent 38(B) → Case 2: recolor 41 → Red, push double-black up to 38. Since 38 is the root, simply color root black (double-black absorbed).
```
    38(B)
        \
       41(R)
```
Delete 38 (black root, right child 41(R)): Replace 38 with 41, color 41 black.
41(B)
Delete 41: Last remaining node. Remove it. Tree is empty.
Answer: After deleting all six keys in order, the tree becomes empty. Key operations used: leaf removal of red nodes (trivial), double-black fix by recoloring siblings, and replacement of black nodes by their red children.
4.8. BST Traversals (Lecture 6, Task 5)
Give the in-order, pre-order, and post-order traversals of the following BST:
Root: 20. Left child: 10 (with children 5 and 15). Right child: 30 (with children 25 and 35).
Click to see the solution
Key Concept: Apply the traversal definition recursively. For BSTs, in-order always gives sorted order.
Tree structure:
```
        20
       /  \
     10    30
    / \   /  \
   5  15 25   35
```
- In-order (Left → Root → Right):
- Visit left subtree of 20: [5, 10, 15]
- Visit root: 20
- Visit right subtree of 20: [25, 30, 35]
- Result: 5, 10, 15, 20, 25, 30, 35 (sorted order — always true for BST in-order)
- Pre-order (Root → Left → Right):
- Visit 20, then left subtree [10, 5, 15], then right subtree [30, 25, 35]
- Result: 20, 10, 5, 15, 30, 25, 35
- Post-order (Left → Right → Root):
- Left subtree [5, 15, 10], right subtree [25, 35, 30], then root
- Result: 5, 15, 10, 25, 35, 30, 20
Answer:
- In-order: 5, 10, 15, 20, 25, 30, 35
- Pre-order: 20, 10, 5, 15, 30, 25, 35
- Post-order: 5, 15, 10, 25, 35, 30, 20
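The three traversals can be verified with a few recursive one-liners (a quick check, not part of the required solution; the nested-tuple encoding is ours):

```python
# Tree encoded as nested tuples: (key, left, right), with None for empty.
tree = (20, (10, (5, None, None), (15, None, None)),
            (30, (25, None, None), (35, None, None)))

def inorder(t):
    return inorder(t[1]) + [t[0]] + inorder(t[2]) if t else []

def preorder(t):
    return [t[0]] + preorder(t[1]) + preorder(t[2]) if t else []

def postorder(t):
    return postorder(t[1]) + postorder(t[2]) + [t[0]] if t else []

print(inorder(tree))    # [5, 10, 15, 20, 25, 30, 35]
print(preorder(tree))   # [20, 10, 5, 15, 30, 25, 35]
print(postorder(tree))  # [5, 15, 10, 25, 35, 30, 20]
```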
4.9. Maximum and Minimum Node Count for a Binary Tree of Height \(h\) (Lecture 6, Task 6)
What is the maximum number of nodes in a binary tree of height \(h\)? What is the minimum? Justify briefly.
Click to see the solution
Key Concept: Think about what configurations of nodes are most/least efficient.
Maximum nodes:
A binary tree of height \(h\) is maximized when it is a perfect binary tree — every level is completely full. Level \(k\) (counting from 0) has \(2^k\) nodes. Summing all levels:
\[N_{\max}(h) = \sum_{k=0}^{h} 2^k = 2^{h+1} - 1\]
For example, a perfect binary tree of height 2 has \(2^3 - 1 = 7\) nodes.
Minimum nodes:
The minimum occurs when the tree has exactly one leaf at the deepest level — a chain or path of length \(h\). At each depth from 0 to \(h-1\), there is exactly one internal node with one child. At depth \(h\), there is one leaf:
\[N_{\min}(h) = h + 1\]
For example, the chain \(1 \to 2 \to 3\) has height 2 and \(3 = 2+1\) nodes.
Answer:
- Maximum: \(2^{h+1} - 1\) nodes (perfect binary tree)
- Minimum: \(h + 1\) nodes (a chain / path)
4.10. Tree Predecessor Pseudocode (Lecture 6, Task 7)
Write pseudocode for TREE-PREDECESSOR(x) (the predecessor of \(x\) in in-order traversal). What is its time complexity?
Click to see the solution
Key Concept: The predecessor is the mirror image of the successor. Instead of going right, we go left; instead of looking for the minimum, we look for the maximum.
The in-order predecessor of \(x\) is the node with the largest key smaller than \(x.\text{key}\).
```
TREE-PREDECESSOR(x):
    if x.left ≠ NIL
        return TREE-MAXIMUM(x.left)   // rightmost node in left subtree
    else
        // Find the lowest ancestor of x whose right child is an ancestor of x
        y = x.parent
        while y ≠ NIL and x == y.left
            x = y
            y = y.parent
        return y
```
Explanation of the two cases:
- If \(x\) has a left subtree: The predecessor is the maximum of that subtree (rightmost node). This is symmetric to finding the successor’s minimum in the right subtree.
- If \(x\) has no left subtree: We climb the tree looking for the first ancestor that we entered from the right side. That ancestor has a key smaller than \(x.\text{key}\), and it’s the closest such ancestor.
Time complexity: \(O(h)\), where \(h\) is the height of the tree. In the worst case, we walk from a leaf all the way up to the root.
Answer: The pseudocode above correctly computes the predecessor. Time complexity is \(O(h)\).
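A direct Python transcription is straightforward (nodes carry parent pointers, as the pseudocode assumes; class and helper names are ours). The tree from the previous example serves as a test case:

```python
class Node:
    def __init__(self, key, parent=None):
        self.key, self.left, self.right, self.parent = key, None, None, parent

def insert(root, key):
    node, par = root, None
    while node:                         # standard BST descent
        par = node
        node = node.left if key < node.key else node.right
    child = Node(key, par)
    if key < par.key:
        par.left = child
    else:
        par.right = child
    return child

def predecessor(x):
    if x.left:                          # Case 1: maximum of the left subtree
        x = x.left
        while x.right:
            x = x.right
        return x
    y = x.parent                        # Case 2: climb until entered from the right
    while y and x is y.left:
        x, y = y, y.parent
    return y

root = Node(20)
nodes = {20: root}
for k in [10, 30, 5, 15, 25, 35]:
    nodes[k] = insert(root, k)

print(predecessor(nodes[25]).key)  # 20 (no left subtree: climb the parents)
print(predecessor(nodes[20]).key)  # 15 (maximum of 20's left subtree)
print(predecessor(nodes[5]))       # None (5 is the minimum key)
```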
4.11. Prove AVL Tree Height is \(O(\log n)\) (Lecture 6, Task 8)
Prove that the height of an AVL tree with \(n\) nodes is \(O(\log n)\).
Click to see the solution
Key Concept: Instead of bounding height from above directly, we bound the minimum number of nodes needed for a given height from below, then invert.
Define: Let \(n_{\min}(h)\) = minimum number of nodes in an AVL tree of height \(h\).
Base cases:
- \(n_{\min}(-1) = 0\) (empty tree has height \(-1\) by convention)
- \(n_{\min}(0) = 1\) (a single node)
Recurrence: An AVL tree of height \(h\) must have a root, plus:
- One subtree of height \(h-1\) (to achieve height \(h\))
- One subtree of height \(\ge h-2\) (by the AVL balance property, heights can differ by at most 1)
The minimum is achieved when one subtree has height \(h-1\) and the other has height \(h-2\):
\[n_{\min}(h) = 1 + n_{\min}(h-1) + n_{\min}(h-2)\]
First few values:
| \(h\) | \(n_{\min}(h)\) |
|---|---|
| -1 | 0 |
| 0 | 1 |
| 1 | 2 |
| 2 | 4 |
| 3 | 7 |
| 4 | 12 |
Lower bound: We claim \(n_{\min}(h) \ge 2^{h/2}\). Proof by strong induction:
- Base: \(n_{\min}(0) = 1 \ge 2^0 = 1\) ✓, \(n_{\min}(1) = 2 \ge 2^{1/2} \approx 1.41\) ✓
- Step: \(n_{\min}(h) = 1 + n_{\min}(h-1) + n_{\min}(h-2) \ge n_{\min}(h-2) + n_{\min}(h-2) = 2 \cdot n_{\min}(h-2) \ge 2 \cdot 2^{(h-2)/2} = 2^{h/2}\).
Conclusion: Any AVL tree with \(n\) nodes and height \(h\) satisfies:
\[n \ge n_{\min}(h) \ge 2^{h/2}\]
Taking logarithms: \(h \le 2\log_2 n\), so \(h = O(\log n)\).
Answer: The height of an AVL tree with \(n\) nodes is at most \(2\log_2 n = O(\log n)\).
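The recurrence and the \(2^{h/2}\) bound are easy to tabulate (a sanity check only, not part of the proof):

```python
def n_min(h: int) -> int:
    """Minimum node count of an AVL tree of height h (h(empty) = -1)."""
    if h < 0:
        return 0
    if h == 0:
        return 1
    return 1 + n_min(h - 1) + n_min(h - 2)

# The claimed lower bound n_min(h) >= 2^(h/2) holds for every tested h.
for h in range(21):
    assert n_min(h) >= 2 ** (h / 2)

print([n_min(h) for h in range(5)])  # [1, 2, 4, 7, 12]
```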
4.12. Prove Red-Black Tree Height is at most \(2\log_2(n+1)\) (Lecture 6, Task 9)
Prove that the height of a red-black tree with \(n\) internal nodes is at most \(2\log_2(n+1)\) (Cormen et al. 2022, Lemma 13.1).
Click to see the solution
Key Concept: Use the black-height and the constraint on red nodes to derive the bound.
Lemma: A subtree rooted at any node \(x\) contains at least \(2^{\text{bh}(x)} - 1\) internal nodes.
Proof by induction on height of \(x\):
- Base case: If \(x\) is a leaf (NIL), it has 0 internal nodes. \(\text{bh}(\text{NIL}) = 0\), and \(2^0 - 1 = 0\). ✓
- Inductive step: If \(x\) is an internal node with children \(l\) and \(r\):
- Each child has black-height \(\ge \text{bh}(x) - 1\) (they may be red, contributing 0, but the black nodes below them satisfy the black-height of \(x\) minus the contribution of \(x\) itself if black).
- By induction, each child’s subtree has \(\ge 2^{\text{bh}(x)-1} - 1\) internal nodes.
- Total for \(x\)’s subtree: \(1 + (2^{\text{bh}(x)-1} - 1) + (2^{\text{bh}(x)-1} - 1) = 2^{\text{bh}(x)} - 1\).
Applying to the root:
Let \(h\) = height of the tree and \(\text{bh}\) = black-height of the root.
By invariant 4 (no two consecutive red nodes), at least half of the nodes on any root-to-leaf path are black. Therefore:
\[\text{bh}(\text{root}) \ge h/2\]
From the lemma:
\[n \ge 2^{\text{bh}(\text{root})} - 1 \ge 2^{h/2} - 1\]
Solving for \(h\):
\[n + 1 \ge 2^{h/2} \implies h/2 \le \log_2(n+1) \implies h \le 2\log_2(n+1)\]
Answer: The height of a red-black tree with \(n\) internal nodes is at most \(2\log_2(n+1) = O(\log n)\).
4.13. Compare AVL and Red-Black Trees (Lecture 6, Task 10)
Compare AVL trees and red-black trees: when would you prefer one over the other?
Click to see the solution
Key Concept: Both structures guarantee \(O(\log n)\) operations, but they differ in their height constants and the cost of rebalancing.
Height comparison:
- AVL trees: height \(\le 1.44\log_2(n+1)\) — strictly tighter bound
- Red-black trees: height \(\le 2\log_2(n+1)\) — up to ~39% taller
Rotation cost comparison:
- Insert: Both require \(O(1)\) rotations (\(\le 2\) for AVL, \(\le 2\) for red-black)
- Delete: AVL requires up to \(O(\log n)\) rotations; red-black requires at most 3 rotations
Practical guidance:
| Scenario | Preferred structure |
|---|---|
| Read-heavy workload (many lookups, few modifications) | AVL tree — lower height means fewer comparisons per lookup |
| Write-heavy workload (frequent insertions and deletions) | Red-black tree — fewer rotations on delete, lower overhead |
| Mixed workload | Red-black tree (used in most standard library implementations, e.g., Java TreeMap, C++ std::map) |
| Memory-constrained | Red-black tree — needs only one color bit per node, whereas an AVL node stores a height or balance factor |
Real-world use: Most language standard libraries (Java, C++, Linux kernel’s interval trees) use red-black trees because writes are common in practice and the simpler deletion fix (at most 3 rotations) outweighs the slightly taller tree.
Answer: Prefer AVL trees when lookups dominate; prefer red-black trees when insertions and deletions are frequent.
4.14. Asymptotic Complexity of Solve (Mock Midterm 2026, Question A)
Compute the asymptotic worst-case time complexity of Solve:
```
Solve(A, n):
    return Helper(A, 1, n)

Helper(A, l, r):
    if r - l <= 50
        return l
    else
        low := l; high := r
        while low < high:
            mid := ⌊(low + high) / 2⌋
            count := 0
            for i from l to r:
                if A[i] > A[mid]: count := count + 1
            if count > (r - l + 1) / 2 then high := mid else low := mid + 1
        k := ⌈(r - l + 1) / 3⌉
        a := Helper(A, l, l + k - 1)
        b := Helper(A, l + k, l + 2*k - 1)
        c := Helper(A, l + 2*k, r)
        return low
```
- Express \(T(n)\) as a recurrence relation.
- Find the asymptotic complexity using the Master Method, specifying the case and justification.
Click to see the solution
Step 1: Set up the recurrence.
Let \(n = r - l + 1\). Base case: if \(n \le 51\), return immediately.
For larger \(n\), set \(k = \lceil n/3 \rceil \approx n/3\).
The three recursive calls:
Helper(A, l, l + k - 1) — size \(k \approx n/3\)
Helper(A, l + k, l + 2k - 1) — size \(k \approx n/3\)
Helper(A, l + 2k, r) — size \(n - 2k \approx n/3\)
So there are 3 recursive calls, each on \(\approx n/3\) elements.
Non-recursive work: the while loop runs \(O(\log n)\) iterations (binary search). Each iteration has the inner for loop: \(\Theta(n)\). Total non-recursive work: \(\Theta(n \log n)\).
Recurrence:
\[T(n) = 3T(n/3) + \Theta(n \log n)\]
Step 2: Apply the Master Theorem.
\(a = 3\), \(b = 3\), \(f(n) = \Theta(n \log n)\).
\(n^{\log_b a} = n^{\log_3 3} = n^1 = n\).
Compare \(f(n) = n \log n\) with \(n^{\log_3 3} = n\):
\(f(n) = n \log n = n^1 \cdot \log n\).
This is the boundary case: \(f(n) = \Theta(n^{\log_b a} \cdot \log^k n)\) with \(k = 1\).
Case 2 of the Master Theorem (extended) applies with \(k = 1\):
\[T(n) = \Theta(n^{\log_b a} \cdot \log^{k+1} n) = \Theta(n \cdot \log^2 n)\]
\[\boxed{T(n) = \Theta(n \log^2 n)}\]
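The \(\Theta(n \log^2 n)\) claim can also be checked numerically by unrolling the recurrence (the constant base case \(T(1) = 1\) is an arbitrary assumption; it does not affect the asymptotics):

```python
from math import log2

def T(n):
    """Unroll T(n) = 3 T(n/3) + n log2(n) with T(1) = 1, for n a power of 3."""
    if n <= 1:
        return 1
    return 3 * T(n // 3) + n * log2(n)

# T(n) / (n * log2(n)^2) should settle toward a constant as n grows.
for k in (6, 9, 12):
    n = 3 ** k
    print(n, T(n) / (n * log2(n) ** 2))
```

The printed ratios drift only slowly toward their limit, but they stay in a narrow band, consistent with \(T(n) = \Theta(n \log^2 n)\).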
4.15. Asymptotic Comparison Table (Mock Midterm 2026, Question B)
Use the most precise asymptotic notation (\(O\), \(\Theta\), \(\Omega\)) or \(\times\) if none applies:
| # | \(f(n)\) | \(g(n)\) |
|---|---|---|
| 1 | \(\sqrt{n} \cdot \log_2 n\) | \(n^{3/4}\) |
| 2 | \(n^{1.001}\) | \(n \log^2 n\) |
| 3 | \(\log_2(n!)\) | \(n \log_2 n\) |
| 4 | \(\sqrt[4]{n}\) | \(\log_2 n\) |
| 5 | \((1 + 2/n)^n\) | \(n^2\) |
| 6 | \(3^n + 2^n\) | \(3^n\) |
| 7 | \(n^2/\log n\) | \(n\sqrt{n}\) |
| 8 | \((n+1)!\) | \(n!\) |
Click to see the solution
1. \(\sqrt{n} \cdot \log_2 n\) vs \(n^{3/4}\)
\(f(n) = n^{1/2} \log n\). Compare with \(g(n) = n^{3/4}\).
\(\frac{f(n)}{g(n)} = \frac{n^{1/2}\log n}{n^{3/4}} = \frac{\log n}{n^{1/4}} \to 0\) as \(n \to \infty\).
So \(f(n) = o(g(n))\).
Answer: \(f(n) = O(g(n))\)
2. \(n^{1.001}\) vs \(n \log^2 n\)
\(\frac{f(n)}{g(n)} = \frac{n^{1.001}}{n \log^2 n} = \frac{n^{0.001}}{\log^2 n} \to \infty\).
Polynomial beats polylog.
Answer: \(f(n) = \Omega(g(n))\)
3. \(\log_2(n!)\) vs \(n \log_2 n\)
By Stirling: \(\log(n!) = \Theta(n \log n)\). Both are \(\Theta(n \log n)\).
Answer: \(f(n) = \Theta(g(n))\)
4. \(\sqrt[4]{n} = n^{1/4}\) vs \(\log_2 n\)
Any positive power of \(n\) grows faster than \(\log n\).
\(\frac{n^{1/4}}{\log n} \to \infty\).
Answer: \(f(n) = \Omega(g(n))\)
5. \((1 + 2/n)^n\) vs \(n^2\)
\((1 + 2/n)^n \to e^2 \approx 7.389\) (constant). Meanwhile \(g(n) = n^2 \to \infty\).
Answer: \(f(n) = O(g(n))\)
6. \(3^n + 2^n\) vs \(3^n\)
\(3^n + 2^n = 3^n(1 + (2/3)^n)\). As \(n \to \infty\), \((2/3)^n \to 0\), so \(3^n + 2^n \sim 3^n\).
\(\frac{3^n + 2^n}{3^n} = 1 + (2/3)^n \to 1\).
Answer: \(f(n) = \Theta(g(n))\)
7. \(n^2/\log n\) vs \(n\sqrt{n} = n^{3/2}\)
\(\frac{f(n)}{g(n)} = \frac{n^2/\log n}{n^{3/2}} = \frac{n^{1/2}}{\log n} \to \infty\).
Answer: \(f(n) = \Omega(g(n))\)
8. \((n+1)!\) vs \(n!\)
\((n+1)! = (n+1) \cdot n!\), so \(\frac{(n+1)!}{n!} = n+1 \to \infty\).
Answer: \(f(n) = \Omega(g(n))\)
4.16. Brute-Force Subset Sum (Mock Midterm 2026, Question C)
Given a set \(A\) of \(n\) integers and a target \(k\), find a subset of \(A\) of maximum size whose elements sum to exactly \(k\). A brute-force algorithm enumerates all subsets.
- Give pseudocode for a brute-force algorithm.
- Worst-case time complexity in terms of \(n\).
- Brief justification.
- Worst-case space complexity (excluding input) using iterative enumeration.
Click to see the solution
Part 1: Pseudocode
```
BruteForceMaxSubset(A, n, k):
    best := ∅
    for mask from 0 to 2^n - 1:
        S := ∅
        total := 0
        for i from 0 to n - 1:
            if (mask >> i) & 1 == 1:
                S := S ∪ {A[i]}
                total := total + A[i]
        if total == k and |S| > |best|:
            best := S
    return best
```
Part 2: Worst-case time complexity
\[\Theta(n \cdot 2^n)\]
Part 3: Justification
There are \(2^n\) subsets. For each subset (represented as a bitmask), we iterate through all \(n\) bits to reconstruct the subset and compute its sum. Each subset takes \(\Theta(n)\) time, giving total \(\Theta(n \cdot 2^n)\).
Part 4: Space complexity (iterative enumeration)
Using iterative enumeration with a bitmask, we only need:
- One variable for the current mask: \(O(1)\) (or \(O(n)\) bits)
- Variables for the current sum and size: \(O(1)\)
- The best subset found so far: \(O(n)\)
Space complexity: \(O(n)\) (excluding input).
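A runnable Python version of the same bitmask enumeration (the example input is ours):

```python
def brute_force_max_subset(A, k):
    """Return a maximum-size subset of A summing exactly to k (None if none)."""
    n = len(A)
    best = None
    for mask in range(2 ** n):          # all 2^n subsets
        subset = [A[i] for i in range(n) if (mask >> i) & 1]
        if sum(subset) == k and (best is None or len(subset) > len(best)):
            best = subset
    return best

print(brute_force_max_subset([3, 1, 2, 5, -1], 5))  # [3, 1, 2, -1]
```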
4.17. Merging \(n\) Sorted Lists (Mock Midterm 2026, Question D)
\(n\) sorted lists contain \(m\) elements in total. Merge them into one sorted list of \(m\) elements.
Fill the table with worst-case time complexity depending on the representation of input lists and output list:
| Input \ Output | Array | Singly-linked (with tail) | Doubly-linked |
|---|---|---|---|
| Array | ? | ? | ? |
| Singly-linked (with tail) | ? | ? | ? |
| Doubly-linked | ? | ? | ? |
Click to see the solution
Merging Strategy:
Use a \(k\)-way merge: repeatedly extract the minimum element from the fronts of \(n\) lists using a min-heap. Total steps: \(m\) extractions, each costs \(O(\log n)\). So the merge itself takes \(O(m \log n)\).
Key operations:
- Reading front of a list: \(O(1)\) for all representations (array: index 0; singly-linked: head; doubly-linked: head).
- Removing front of a list: \(O(1)\) for all (array with pointer offset, linked list head removal).
- Appending to output:
- Array: \(O(1)\) amortized (dynamic array), but each element written: \(O(1)\).
- Singly-linked with tail: \(O(1)\) per append (use tail pointer).
- Doubly-linked: \(O(1)\) per append (use tail pointer).
Total complexity:
The merge algorithm itself is \(O(m \log n)\) regardless of representation (assuming efficient front access and removal). The output construction is \(O(m)\) for all output types.
However, if we use a naive approach (no heap, sequential scan of list heads): each of \(m\) elements requires scanning \(n\) list fronts, giving \(O(mn)\).
With a min-heap of size \(n\):
| Input \ Output | Array | Singly-linked (with tail) | Doubly-linked |
|---|---|---|---|
| Array | \(O(m \log n)\) | \(O(m \log n)\) | \(O(m \log n)\) |
| Singly-linked (with tail) | \(O(m \log n)\) | \(O(m \log n)\) | \(O(m \log n)\) |
| Doubly-linked | \(O(m \log n)\) | \(O(m \log n)\) | \(O(m \log n)\) |
Answer: All cells are \(O(m \log n)\) when using a min-heap. The representation does not change the asymptotic complexity since all representations support \(O(1)\) front access and removal, and \(O(1)\) append to the output.
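The heap-based \(k\)-way merge is a few lines with `heapq` (a sketch; Python lists stand in for whichever concrete representation is used):

```python
import heapq

def merge_sorted_lists(lists):
    """k-way merge: pop the smallest front element m times via a min-heap."""
    heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
    heapq.heapify(heap)                      # O(n) to build
    out = []
    while heap:
        val, i, j = heapq.heappop(heap)      # O(log n) per extraction
        out.append(val)
        if j + 1 < len(lists[i]):            # push the next front element
            heapq.heappush(heap, (lists[i][j + 1], i, j + 1))
    return out

print(merge_sorted_lists([[1, 4, 9], [2, 3], [5], []]))  # [1, 2, 3, 4, 5, 9]
```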
4.18. Hybrid Quicksort + Counting Sort (Mock Midterm 2026, Question E)
A hybrid sorting algorithm: run QUICK-SORT but stop when the subarray size is \(\le k\), then apply COUNTING-SORT to each small block (keys in \([0, R]\)), and concatenate the results.
- Worst-case time complexity in terms of \(n\), \(k\), \(R\).
- If \(k = \Theta(\log n)\) and \(R = O(n)\), what is the overall worst-case complexity?
Click to see the solution
Part 1: General complexity
QUICK-SORT phase: The standard QUICK-SORT worst case is \(O(n^2)\) regardless of cutoff \(k\) (with bad pivots, each of the first \(n - k\) levels does \(O(n)\) work). So the QUICK-SORT phase is \(O(n^2)\) worst case.
COUNTING-SORT phase: There are \(O(n/k)\) subarrays of size at most \(k\). Each COUNTING-SORT on a subarray of size \(s \le k\) with keys in \([0, R]\) takes \(O(s + R)\). Total: \(O(n + (n/k) \cdot R)\).
Concatenation: \(O(n)\).
Total worst case: \(O(n^2 + n + (n/k) \cdot R) = O(n^2 + nR/k)\).
\[\boxed{O(n^2 + nR/k)}\]
Part 2: With \(k = \Theta(\log n)\) and \(R = O(n)\)
\(nR/k = O(n \cdot n / \log n) = O(n^2 / \log n)\).
Total: \(O(n^2 + n^2/\log n) = O(n^2)\).
\[\boxed{O(n^2)}\]
Note: The hybrid approach doesn’t improve the worst case because QUICK-SORT still degenerates. In the average case, QUICK-SORT phase takes \(O(n \log n)\), giving total \(O(n \log n + nR/k) = O(n \log n + n \cdot n/\log n) = O(n^2/\log n)\).
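The hybrid scheme can be sketched in Python, assuming nonnegative integer keys in \([0, R]\). Names are illustrative, and this sketch recurses on copies for clarity rather than partitioning in place:

```python
def counting_sort(a, R):
    # O(len(a) + R): tally each key in [0, R], then emit in order.
    count = [0] * (R + 1)
    for x in a:
        count[x] += 1
    out = []
    for v, c in enumerate(count):
        out.extend([v] * c)
    return out

def hybrid_sort(a, k, R):
    # Quicksort until a block has size <= k, then counting-sort that block.
    if len(a) <= k:
        return counting_sort(a, R)
    pivot = a[-1]  # last-element pivot, as in the worst-case analysis
    left = [x for x in a[:-1] if x <= pivot]
    right = [x for x in a[:-1] if x > pivot]
    return hybrid_sort(left, k, R) + [pivot] + hybrid_sort(right, k, R)
```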
4.19. Bucket Sort Application (Mock Midterm 2026, Question F)
Apply BUCKET-SORT to: \(0.92, 0.21, 0.03, 0.55, 0.07, 0.41, 0.25, 0.12, 0.67, 0.05\)
Use 10 buckets for ranges \([0.0, 0.1), [0.1, 0.2), \ldots, [0.9, 1.0]\).
- Contents of each bucket (before sorting inside).
- Sorted sequence.
Click to see the solution
Part 1: Bucket contents
| Bucket | Range | Contents |
|---|---|---|
| 0 | \([0.0, 0.1)\) | 0.03, 0.07, 0.05 |
| 1 | \([0.1, 0.2)\) | 0.12 |
| 2 | \([0.2, 0.3)\) | 0.21, 0.25 |
| 3 | \([0.3, 0.4)\) | (empty) |
| 4 | \([0.4, 0.5)\) | 0.41 |
| 5 | \([0.5, 0.6)\) | 0.55 |
| 6 | \([0.6, 0.7)\) | 0.67 |
| 7 | \([0.7, 0.8)\) | (empty) |
| 8 | \([0.8, 0.9)\) | (empty) |
| 9 | \([0.9, 1.0]\) | 0.92 |
Part 2: Sort each bucket, concatenate
- Bucket 0: sort \(\{0.03, 0.07, 0.05\}\) → \(0.03, 0.05, 0.07\)
- Bucket 1: \(0.12\)
- Bucket 2: sort \(\{0.21, 0.25\}\) → \(0.21, 0.25\)
- Bucket 3: (empty)
- Bucket 4: \(0.41\)
- Bucket 5: \(0.55\)
- Bucket 6: \(0.67\)
- Bucket 7: (empty)
- Bucket 8: (empty)
- Bucket 9: \(0.92\)
Sorted sequence: \(0.03, 0.05, 0.07, 0.12, 0.21, 0.25, 0.41, 0.55, 0.67, 0.92\)
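The procedure can be replayed with a short Python sketch, assuming inputs in \([0, 1]\) (the function name is illustrative):

```python
def bucket_sort(a, nbuckets=10):
    # Distribute values in [0, 1] across nbuckets ranges, sort each, concatenate.
    buckets = [[] for _ in range(nbuckets)]
    for x in a:
        idx = min(int(x * nbuckets), nbuckets - 1)  # 1.0 goes to the last bucket
        buckets[idx].append(x)
    return [x for b in buckets for x in sorted(b)]
```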
4.20. True or False (Mock Midterm 2026, Question G)
Determine TRUE or FALSE for each statement:
- \(n^2 = O(n^{3/2})\)
- In a BST, the node with maximum key has no left child
- HEAP-SORT has \(O(n)\) worst-case time complexity
- RADIX-SORT (fixed digit range) sorts \(n\) integers in \(\Theta(n)\) time regardless of number of digits
- In ARRAYQUEUE, ENQUEUE(x) has \(\Theta(n)\) worst-case time complexity
- Height of an AVL tree with \(n\) nodes is \(\Theta(\log n)\)
- For \(T(n) = 2T(n/2) + n\), master theorem gives \(T(n) = \Theta(n)\)
- In dynamic programming, overlapping subproblems are solved at least twice
- MERGE-SORT is a comparison-based sorting algorithm
- A red-black tree with \(n\) internal nodes has height at least \(2\log_2(n+1)\)
Click to see the solution
1. \(n^2 = O(n^{3/2})\) — FALSE.
\(O(n^{3/2})\) means the function is bounded above by \(c \cdot n^{3/2}\). But \(n^2 / n^{3/2} = n^{1/2} \to \infty\), so \(n^2\) is not \(O(n^{3/2})\).
2. In a BST, the node with maximum key has no left child — FALSE.
The node with maximum key has no right child (since no key is larger). It can have a left child. For example: insert 5, then 3 into a BST. Root is 5, left child is 3. Node 5 has maximum key and has a left child.
3. HEAP-SORT has \(O(n)\) worst-case time complexity — FALSE.
HEAP-SORT has \(\Theta(n \log n)\) worst-case time complexity.
4. RADIX-SORT sorts \(n\) integers in \(\Theta(n)\) time regardless of number of digits — FALSE.
RADIX-SORT with \(d\) digits and base \(b\) runs in \(\Theta(d(n + b))\). If \(d\) grows (e.g., \(d = \Theta(\log n)\)), the complexity is \(\Theta(n \log n)\), not \(\Theta(n)\).
5. In ARRAYQUEUE, ENQUEUE(x) has \(\Theta(n)\) worst-case time complexity — FALSE.
In a standard circular-array queue with a fixed-capacity (or preallocated) buffer, ENQUEUE is \(O(1)\) worst case. With dynamic resizing, a single ENQUEUE can cost \(O(n)\) when the array is copied, but the amortized cost remains \(O(1)\).
Under the standard circular-array interpretation the statement is FALSE (\(O(1)\) worst case per ENQUEUE). Only under the "must resize" interpretation would it be TRUE.
6. Height of an AVL tree with \(n\) nodes is \(\Theta(\log n)\) — TRUE.
By AVL property, height is \(O(\log n)\) and \(\Omega(\log n)\), so \(\Theta(\log n)\).
7. For \(T(n) = 2T(n/2) + n\), master theorem gives \(T(n) = \Theta(n)\) — FALSE.
\(a=2, b=2, f(n)=n, n^{\log_2 2} = n\). This is Case 2: \(T(n) = \Theta(n \log n)\), not \(\Theta(n)\).
8. In dynamic programming, overlapping subproblems are solved at least twice — FALSE.
The key benefit of DP is that overlapping subproblems are solved only once and their results are cached/reused. Without DP (plain recursion), they’d be solved multiple times.
9. MERGE-SORT is a comparison-based sorting algorithm — TRUE.
MERGE-SORT compares elements to determine order, so yes, it is comparison-based.
10. A red-black tree with \(n\) internal nodes has height at least \(2\log_2(n+1)\) — FALSE.
The known bound is height \(\le 2\log_2(n+1)\) (upper bound). A red-black tree can have height as small as \(\log_2(n+1)\) (a perfect binary tree). So the height is at most \(2\log_2(n+1)\), not at least.
4.21. BST Construction and Deletions (Mock Midterm 2026, Question H)
Start from an empty BST and insert keys in order: 15, 8, 22, 4, 11, 19, 30, 2, 6, 25.
Use array representation with BFS indexing (root at 0, left child of \(i\) at \(2i+1\), right child at \(2i+2\)).
- Draw the constructed BST.
- Fill the array representation (indices 0–15) after all insertions.
- Array after deleting key 30.
- Array after deleting key 8 (from tree that already had 30 deleted).
- Draw the final BST.
Click to see the solution
Part 1: Constructing the BST
Insert in order: 15, 8, 22, 4, 11, 19, 30, 2, 6, 25.
15
/ \
8 22
/ \ / \
4 11 19 30
/ \ /
2 6 25
Part 2: Array representation (BFS order)
| idx | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| key | 15 | 8 | 22 | 4 | 11 | 19 | 30 | 2 | 6 | – | – | – | – | 25 | – | – |
Verification of indices:
- 15 at idx 0
- 8 at idx 1 (left of 15), 22 at idx 2 (right of 15)
- 4 at idx 3 (left of 8), 11 at idx 4 (right of 8)
- 19 at idx 5 (left of 22), 30 at idx 6 (right of 22)
- 2 at idx 7 (left of 4), 6 at idx 8 (right of 4)
- 25: left of 30. Left of idx 6 is \(2 \times 6 + 1 = 13\). So 25 at idx 13.
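The insertion sequence can be replayed with a small Python sketch of the array representation (the helper name is illustrative):

```python
def bst_array_insert(arr, key):
    # BST stored in an array: root at index 0, children of i at 2i+1 and 2i+2.
    i = 0
    while arr[i] is not None:
        i = 2 * i + 1 if key < arr[i] else 2 * i + 2
    arr[i] = key

arr = [None] * 16
for k in [15, 8, 22, 4, 11, 19, 30, 2, 6, 25]:
    bst_array_insert(arr, k)
```

Running this reproduces the table above, with 25 landing at index 13.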
Part 3: Delete key 30
Node 30 is at idx 6. It has one child: 25 (at idx 13). Replace 30 with its only child 25. Move 25 to idx 6, clear idx 13.
| idx | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| key | 15 | 8 | 22 | 4 | 11 | 19 | 25 | 2 | 6 | – | – | – | – | – | – | – |
Part 4: Delete key 8 (from tree after deleting 30)
Current tree:
15
/ \
8 22
/ \ / \
4 11 19 25
/ \
2 6
Node 8 is at idx 1. It has two children: 4 (idx 3) and 11 (idx 4).
In-order successor of 8: go right to 11 (idx 4), then leftmost — 11 has no left child, so successor is 11.
Replace key at idx 1 with 11. Remove 11 from idx 4.
| idx | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| key | 15 | 11 | 22 | 4 | – | 19 | 25 | 2 | 6 | – | – | – | – | – | – | – |
Part 5: Final BST
15
/ \
11 22
/ / \
4 19 25
/ \
2 6
4.22. LCS Problem (Mock Midterm 2026, Question I)
Longest Common Subsequence (LCS) problem. Given sequences \(X\) and \(Y\):
- Worst-case time and space complexity using DP with tabulation (\(|X| = m\), \(|Y| = n\)).
- Find the LCS length for \(X = \langle A,B,C,A,B \rangle\) and \(Y = \langle B,A,C,B,A \rangle\). Give one LCS.
- Fill the DP table \(C[i,j]\) (LCS length of \(X[1..i]\) and \(Y[1..j]\)).
Click to see the solution
Part 1: Complexity
The DP table has \((m+1) \times (n+1)\) entries, each computed in \(O(1)\).
- Time complexity: \(\Theta(mn)\)
- Space complexity: \(\Theta(mn)\) (for the full table); \(\Theta(\min(m,n))\) with space optimization.
Part 2 & 3: DP Table
\(X = \langle A,B,C,A,B \rangle\) (rows), \(Y = \langle B,A,C,B,A \rangle\) (columns).
Recurrence: \[C[i,j] = \begin{cases} 0 & \text{if } i=0 \text{ or } j=0 \\ C[i-1,j-1]+1 & \text{if } X[i]=Y[j] \\ \max(C[i-1,j], C[i,j-1]) & \text{otherwise} \end{cases}\]
Full table \(C[i,j]\) for \(i = 0..5\) (rows, \(X\)) and \(j = 0..5\) (columns, \(Y\)):

| \(i \backslash j\) | 0 | 1 (B) | 2 (A) | 3 (C) | 4 (B) | 5 (A) |
|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 1 (A) | 0 | 0 | 1 | 1 | 1 | 1 |
| 2 (B) | 0 | 1 | 1 | 1 | 2 | 2 |
| 3 (C) | 0 | 1 | 1 | 2 | 2 | 2 |
| 4 (A) | 0 | 1 | 2 | 2 | 2 | 3 |
| 5 (B) | 0 | 1 | 2 | 2 | 3 | 3 |
LCS length: 3.
One LCS: Trace back from \(C[5,5]=3\):
- \(C[5,5]=3\): \(X[5]=B \ne Y[5]=A\), go to \(\max(C[4,5], C[5,4]) = \max(3,3)\), go up to \(C[4,5]\).
- \(C[4,5]=3\): \(X[4]=A = Y[5]=A\) ✓, take this match. Go to \(C[3,4]=2\).
- \(C[3,4]=2\): \(X[3]=C \ne Y[4]=B\), go to \(\max(C[2,4], C[3,3]) = \max(2,2)\), go up to \(C[2,4]\).
- \(C[2,4]=2\): \(X[2]=B = Y[4]=B\) ✓, take this match. Go to \(C[1,3]=1\).
- \(C[1,3]=1\): \(X[1]=A \ne Y[3]=C\), go to \(\max(C[0,3], C[1,2]) = \max(0,1)\), go to \(C[1,2]\).
- \(C[1,2]=1\): \(X[1]=A = Y[2]=A\) ✓, take this match. Go to \(C[0,1]=0\). Done.
LCS: \(\langle A, B, A \rangle\)
Answer: LCS length is 3, one LCS is \(\langle A, B, A \rangle\).
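The tabulation and traceback can be sketched in Python; ties in the traceback prefer moving up, matching the trace above (the function name is illustrative):

```python
def lcs(X, Y):
    m, n = len(X), len(Y)
    C = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                C[i][j] = C[i - 1][j - 1] + 1
            else:
                C[i][j] = max(C[i - 1][j], C[i][j - 1])
    # Trace back one LCS; on ties, move up (decrease i), as in the solution.
    i, j, out = m, n, []
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1])
            i -= 1
            j -= 1
        elif C[i - 1][j] >= C[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return C[m][n], ''.join(reversed(out))
```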
4.23. AVL Tree Rotation (Mock Midterm 2026, Question J)
AVL tree stored in array representation (indices 0–15):
Insert key 3 using BST insertion (without rebalancing). Which rotation(s) restore the AVL invariant? Give the pair (parent, child) for each rotation.
Click to see the solution
Step 1: Reconstruct the AVL tree.
BFS indexing:
- idx 0: key 7 (root)
- idx 1: key 4 (left of 7)
- idx 2: key 9 (right of 7)
- idx 3: key 1 (left of 4)
- idx 4: key 5 (right of 4)
- idx 5: key 8 (left of 9)
- idx 6: key 10 (right of 9)
- idx 7: – (left of 1, empty)
- idx 8: key 2 (right of 1)
Tree structure:
7
/ \
4 9
/ \ / \
1 5 8 10
\
2
Heights: node 2 → h=0, node 1 → h=1, node 5 → h=0, node 4 → h=2, node 8 → h=0, node 10 → h=0, node 9 → h=1, node 7 → h=3.
This is a valid AVL tree (balance factors all in \(\{-1, 0, 1\}\)).
Step 2: Insert key 3.
Path: 3 < 7 → go left to 4. 3 < 4 → go left to 1. 3 > 1 → go right to 2. 3 > 2 → insert 3 as right child of 2.
Right child of 2: idx \(2 \times 8 + 2 = 18\) — exceeds array bounds of 0–15. In abstract tree terms:
7
/ \
4 9
/ \ / \
1 5 8 10
\
2
\
3
Step 3: Check AVL invariant after insertion.
Update heights bottom-up:
- node 3: h=0
- node 2: h=1 (right child 3 has h=0, no left child)
- node 1: left subtree empty (height \(-1\)), right subtree height 1, so balance factor (right minus left) \(= 1 - (-1) = 2\). AVL invariant violated at node 1: right-heavy.
Node 2 (right child of 1): right subtree height 0, left subtree height \(-1\), balance factor 1. Right-heavy.
Case: Right-Right → single Left rotation.
Step 4: Rotation.
Apply Left rotation at node 1 with node 2 as the new root of this subtree.
- Rotation pair: (parent=1, child=2)
After rotation:
node 2 becomes the subtree root
node 1 becomes left child of 2
node 3 remains right child of 2
Subtree rooted at 2:
2
/ \
1 3
Full tree after rotation:
7
/ \
4 9
/ \ / \
2 5 8 10
/ \
1 3
Answer: One Left rotation at node 1 with child 2. Pair: (parent=1, child=2).
After rotation, the AVL invariant is restored (all balance factors become 0 or ±1).
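The single left rotation can be sketched on a minimal node class (a hypothetical helper, not from the source):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_left(parent):
    # Right-Right case: the parent's right child becomes the subtree root.
    child = parent.right
    parent.right = child.left   # child's left subtree moves under parent
    child.left = parent
    return child                # new subtree root

# The violating subtree from the example: the right-right chain 1 -> 2 -> 3.
sub = rotate_left(Node(1, right=Node(2, right=Node(3))))
```

After the call, 2 is the subtree root with 1 as its left child and 3 as its right child, matching the diagram above.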
4.24. Maximum Subarray Problem (Mock Midterm 2026, Question K)
For the Maximum Subarray problem (find contiguous subarray with maximum sum), state the worst-case time complexity and brief justification for each approach:
- Brute Force
- Divide-and-Conquer
- Dynamic Programming (Kadane’s Algorithm)
Click to see the solution
1. Brute Force
Complexity: \(\Theta(n^2)\)
Justification: Enumerate all \(O(n^2)\) pairs \((i, j)\) with \(i \le j\). For each pair, sum the subarray \(A[i..j]\). Using prefix sums, each sum is \(O(1)\), giving total \(O(n^2)\). (Without prefix sums: \(O(n^3)\).)
2. Divide-and-Conquer
Complexity: \(\Theta(n \log n)\)
Justification: Divide the array in half. The maximum subarray either lies entirely in the left half, entirely in the right half, or crosses the midpoint. The crossing case is solved in \(O(n)\) (scan left from mid and right from mid). Recurrence: \(T(n) = 2T(n/2) + O(n)\), which gives \(T(n) = \Theta(n \log n)\) by the Master Theorem (Case 2).
3. Dynamic Programming (Kadane’s Algorithm)
Complexity: \(\Theta(n)\)
Justification: Maintain a running variable \(\text{maxEndingHere}\): for each position \(i\), compute the maximum subarray sum ending at \(i\). This is either \(A[i]\) alone or \(A[i] + \text{maxEndingHere}(i-1)\). One pass through the array suffices. Each element is processed in \(O(1)\), so total is \(\Theta(n)\).
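The one-pass recurrence can be written out directly (a minimal sketch of Kadane's algorithm; the function name is illustrative):

```python
def max_subarray(a):
    # Kadane: the best sum ending at i is either a[i] alone or a[i]
    # extending the best sum ending at i-1; track the global best.
    best = cur = a[0]
    for x in a[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best
```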
4.25. Decision Tree Lower Bound (Mock Midterm 2026, Question L)
- What is the minimum number of leaves in a decision tree for any comparison-based sorting algorithm on an input of size 6?
- What is the lower bound on the number of comparisons for sorting \(n\) elements asymptotically?
Click to see the solution
Part 1: Minimum leaves for \(n = 6\)
A comparison-based sorting algorithm must distinguish all \(n!\) input permutations. The decision tree therefore needs at least \(n!\) leaves: two different permutations require different rearrangements of the input, so they cannot end at the same leaf.
For \(n = 6\):
\[6! = 720\]
Minimum number of leaves: \(\mathbf{720}\)
Part 2: Asymptotic lower bound
The height of the decision tree (= number of comparisons in the worst case) satisfies:
\[h \ge \log_2(n!) = \Theta(n \log n)\]
By Stirling’s approximation: \(\log_2(n!) = n\log_2 n - n\log_2 e + O(\log n) = \Theta(n \log n)\).
Lower bound: \(\Omega(n \log n)\) comparisons.
Any comparison-based sorting algorithm requires at least \(\Omega(n \log n)\) comparisons in the worst case.
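The two bounds for \(n = 6\) can be checked numerically: a tree of height \(h\) has at most \(2^h\) leaves, so at least \(\lceil \log_2 720 \rceil = 10\) comparisons are needed in the worst case.

```python
import math

leaves = math.factorial(6)                       # n! leaves needed for n = 6
min_comparisons = math.ceil(math.log2(leaves))   # height h must satisfy 2^h >= n!
```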
4.26. Minimum Nodes in AVL Tree (Mock Midterm 2026, Question M)
- Find the minimum number \(N(h)\) of internal nodes in an AVL tree of height \(h = 4\), \(h = 8\), \(h = 12\).
- Draw a valid AVL tree of height 4 with the minimum number of nodes.
Click to see the solution
Part 1: \(N(h)\) — Fibonacci-like recurrence
The minimum number of nodes in an AVL tree of height \(h\) follows: \[N(h) = N(h-1) + N(h-2) + 1, \quad N(0) = 1, \quad N(1) = 2\]
Compute:
- \(N(0) = 1\)
- \(N(1) = 2\)
- \(N(2) = N(1) + N(0) + 1 = 2 + 1 + 1 = 4\)
- \(N(3) = N(2) + N(1) + 1 = 4 + 2 + 1 = 7\)
- \(N(4) = N(3) + N(2) + 1 = 7 + 4 + 1 = \mathbf{12}\)
- \(N(5) = N(4) + N(3) + 1 = 12 + 7 + 1 = 20\)
- \(N(6) = N(5) + N(4) + 1 = 20 + 12 + 1 = 33\)
- \(N(7) = N(6) + N(5) + 1 = 33 + 20 + 1 = 54\)
- \(N(8) = N(7) + N(6) + 1 = 54 + 33 + 1 = \mathbf{88}\)
- \(N(9) = N(8) + N(7) + 1 = 88 + 54 + 1 = 143\)
- \(N(10) = N(9) + N(8) + 1 = 143 + 88 + 1 = 232\)
- \(N(11) = N(10) + N(9) + 1 = 232 + 143 + 1 = 376\)
- \(N(12) = N(11) + N(10) + 1 = 376 + 232 + 1 = \mathbf{609}\)
Answers:
- \(N(4) = 12\)
- \(N(8) = 88\)
- \(N(12) = 609\)
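The recurrence can be checked with a few lines of Python (the function name is illustrative):

```python
def avl_min_nodes(h):
    # N(h) = N(h-1) + N(h-2) + 1, with N(0) = 1 and N(1) = 2.
    N = [1, 2]
    for i in range(2, h + 1):
        N.append(N[i - 1] + N[i - 2] + 1)
    return N[h]
```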
Part 2: AVL tree of height 4 with 12 nodes
The minimum AVL tree of height \(h\) is built recursively: the root has one subtree of height \(h-1\) and one of height \(h-2\) (both also minimum AVL trees).
r
/ \
T3 T2
where \(T3\) = min AVL of height 3 (7 nodes) and \(T2\) = min AVL of height 2 (4 nodes).
Min AVL of height 4 has 12 nodes: root + left subtree (min AVL h=3, 7 nodes) + right subtree (min AVL h=2, 4 nodes). Total: \(N(4) = 1 + N(3) + N(2) = 1 + 7 + 4 = 12\) ✓
Concrete AVL tree of height 4 with 12 nodes:
10
/ \
5 15
/ \ /
3 7 13
/ / \ / \
2 6 8 12 14
/
1
This has 12 nodes. Heights: node 1 → 0, node 2 → 1, node 6 → 0, node 8 → 0, node 7 → 1, node 3 → 2, node 5 → 3, node 12 → 0, node 14 → 0, node 13 → 1, node 15 → 2, node 10 → 4. ✓
4.27. Red-Black Tree Construction (Mock Midterm 2026, Question N)
Draw a valid red-black tree with keys 10, 5, 15, 3, 7, 12, 20 inserted in this order. Mark each node red or black.
Click to see the solution
Red-Black Tree Properties:
1. Every node is red or black.
2. The root is black.
3. Every leaf (NIL) is black.
4. If a node is red, both of its children are black.
5. All paths from any node to its NIL descendants contain the same number of black nodes.
Step-by-step insertion:
Insert 10: Root is black.
[10B]
Insert 5: 5 < 10, goes left. New node is red.
[10B]
/
[5R]
No violation.
Insert 15: 15 > 10, goes right. New node is red.
[10B]
/ \
[5R] [15R]
No violation.
Insert 3: 3 < 10, 3 < 5, goes left of 5. New node is red. Parent (5) is red — violation!
Uncle is 15 (red) → Case 1: recolor. Recolor parent (5) and uncle (15) to black, grandparent (10) to red. But 10 is root → recolor back to black.
[10B]
/ \
[5B] [15B]
/
[3R]
No violation.
Insert 7: 7 < 10, 7 > 5, goes right of 5. New node is red. Parent (5) is black → no violation.
[10B]
/ \
[5B] [15B]
/ \
[3R] [7R]
Insert 12: 12 > 10, 12 < 15, goes left of 15. New node is red. Parent (15) is black → no violation.
[10B]
/ \
[5B] [15B]
/ \ /
[3R] [7R][12R]
Insert 20: 20 > 10, 20 > 15, goes right of 15. New node is red. Parent (15) is black → no violation.
[10B]
/ \
[5B] [15B]
/ \ / \
[3R] [7R][12R][20R]
Final Red-Black Tree:
10 (Black)
/ \
5 (Black) 15 (Black)
/ \ / \
3 (Red) 7 (Red) 12 (Red) 20 (Red)
Verification:
- Root (10) is black ✓
- All red nodes (3, 7, 12, 20) have black parents ✓
- Black-height from root to any NIL: all paths pass through 2 black nodes (root + one of 5 or 15) → black-height = 2 ✓
- All NIL leaves are black (implicit) ✓
Answer: The tree is valid. Keys 3, 7, 12, 20 are Red; keys 5, 10, 15 are Black.
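The five properties can be verified mechanically on the final tree. In the sketch below (representation and helper are illustrative) NIL leaves count as black nodes, so the root's black-height comes out as 3, one more than the count of internal black nodes per path given above:

```python
def check_rb(node, is_root=True):
    """Return the black-height of the subtree (counting NIL leaves as black),
    asserting the red-black properties. A node is (key, color, left, right);
    None represents a NIL leaf."""
    if node is None:
        return 1  # NIL leaf: black
    key, color, left, right = node
    assert not (is_root and color == 'R'), "root must be black"
    if color == 'R':
        for child in (left, right):
            assert child is None or child[1] == 'B', "red node with red child"
    bh_l = check_rb(left, False)
    bh_r = check_rb(right, False)
    assert bh_l == bh_r, "unequal black-heights"
    return bh_l + (1 if color == 'B' else 0)

# The final tree from the construction above.
tree = (10, 'B',
        (5, 'B', (3, 'R', None, None), (7, 'R', None, None)),
        (15, 'B', (12, 'R', None, None), (20, 'R', None, None)))
bh = check_rb(tree)
```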
4.28. Recurrence Relation and Master Theorem (Midterm 2025, Question A)
Compute the asymptotic worst-case time complexity of the following solve procedure:
solve(A, n):
return helper(A, 0, n - 1)
helper(A, l, r):
if r - l <= 1
return 1
else
k := ceil((r - l + 1) / 9)
a := helper(A, l, min(l + 3*k, r))
b := helper(A, l + 2*k, min(l + 5*k, r))
c := helper(A, l + 4*k, min(l + 7*k, r))
d := helper(A, l + 6*k, r)
for i from l to r:
for j from l to r:
if A[i] > A[j]:
exchange A[i] with A[j]
return a
- Express the running time \(T(n)\) as a recurrence relation.
- Find the asymptotic complexity using the master method.
Click to see the solution
Key Concept: Identify the number of recursive calls, their sizes, and the non-recursive work per call.
Part 1 — Recurrence:
The input size is \(n = r - l + 1\). Setting \(k = \lceil n/9 \rceil \approx n/9\):
helper(A, l, l + 3k) works on approximately \(3k \approx n/3\) elements
helper(A, l+2k, l+5k) works on approximately \(3k \approx n/3\) elements
helper(A, l+4k, l+7k) works on approximately \(3k \approx n/3\) elements
helper(A, l+6k, r) works on approximately \(3k \approx n/3\) elements
There are 4 recursive calls, each of size \(\approx n/3\).
The double loop runs in \(\Theta(n^2)\) (the range \([l, r]\) has \(n\) elements).
\[T(n) = 4T(n/3) + \Theta(n^2)\]
Part 2 — Master theorem:
Parameters: \(a = 4\), \(b = 3\), \(f(n) = \Theta(n^2)\).
Compute \(n^{\log_b a} = n^{\log_3 4} \approx n^{1.26}\).
Check Case 3: \(f(n) = n^2 = \Omega(n^{\log_3 4 + \epsilon})\) for \(\epsilon = 2 - \log_3 4 \approx 0.74 > 0\). ✓
Regularity: \(a \cdot f(n/b) = 4 \cdot (n/3)^2 = 4n^2/9 \le (4/9) \cdot f(n)\) with \(c = 4/9 < 1\). ✓
Case 3 applies: \(T(n) = \Theta(f(n)) = \Theta(n^2)\).
Answer: \(T(n) = 4T(n/3) + \Theta(n^2)\). By Master Theorem Case 3, \(T(n) = \Theta(n^2)\).
4.29. Radix Sort with Different Inner Sorts (Midterm 2025, Question B)
Consider sorting \(p\) phrases using RADIX-SORT, treating each word as a “digit”:
- Each phrase is a list of at most \(w\) words.
- Each word is a list of at most \(s\) symbols.
- Each symbol comes from an alphabet of size \(a\).
What is the worst-case time complexity for sorting \(p\) phrases if words are sorted using:
- INSERTION-SORT
- COUNTING-SORT
- RADIX-SORT (treating each symbol as a digit of the word)
Click to see the solution
Key Concept: RADIX-SORT on phrases makes \(w\) passes, one per word-position. Each pass sorts \(p\) phrases by one word-digit using a stable sort of words.
Case 1 — Words sorted by INSERTION-SORT:
Comparing two words takes \(O(s)\) (up to \(s\) character comparisons). INSERTION-SORT on \(p\) items makes \(O(p^2)\) comparisons.
One pass: \(O(p^2 \cdot s)\). Total for \(w\) passes: \(\mathbf{O(w \cdot p^2 \cdot s)}\).
Case 2 — Words sorted by COUNTING-SORT:
Map each word to an integer key in \([0, a^s)\). COUNTING-SORT with range \(a^s\) on \(p\) items: \(O(p + a^s)\).
Total for \(w\) passes: \(\mathbf{O(w \cdot (p + a^s))}\).
Case 3 — Words sorted by RADIX-SORT (each symbol as a digit):
RADIX-SORT on \(p\) words of length \(s\) from alphabet of size \(a\): \(s\) passes each costing \(O(p + a)\). One word-sort: \(O(s(p + a))\).
Total for \(w\) outer passes: \(\mathbf{O(w \cdot s \cdot (p + a))}\).
Answer: 1. \(O(w \cdot p^2 \cdot s)\) 2. \(O(w \cdot (p + a^s))\) 3. \(O(w \cdot s \cdot (p + a))\)
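Case 3 can be sketched for fixed-shape phrases, assuming exactly \(w\) words of exactly \(s\) lowercase letters each (names are illustrative; handling words of unequal length would need padding):

```python
def counting_sort_by(items, key, R):
    # Stable counting sort of items by key(item) in [0, R): O(len(items) + R).
    buckets = [[] for _ in range(R)]
    for it in items:
        buckets[key(it)].append(it)
    return [it for b in buckets for it in b]

def radix_sort_phrases(phrases, w, s, a=26):
    """LSD radix sort over w word positions, each itself radix-sorted over
    its s symbols: w * s stable passes of cost O(p + a) each."""
    for word_pos in reversed(range(w)):
        for sym_pos in reversed(range(s)):
            phrases = counting_sort_by(
                phrases,
                lambda ph: ord(ph[word_pos][sym_pos]) - ord('a'),
                a)
    return phrases
```

Because every pass is stable, the net effect is lexicographic order on the concatenated symbols, at total cost \(O(w \cdot s \cdot (p + a))\).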
4.30. List Intersection Complexity (Midterm 2025, Question C)
Consider the intersection of \(k\) non-empty sorted lists of integers with \(m\) total elements. For each representation, determine whether an \(O(mk)\) algorithm is possible:
- Dynamic Array List
- Singly-Linked List (no tail pointer)
- Doubly-Linked List
Click to see the solution
Key Concept: The algorithm compares the fronts of all \(k\) lists and advances the minimum-front list. Each step costs \(O(k)\) comparisons plus \(O(1)\) advancement per element.
Case 1 — Dynamic Array: Maintain current index per list; advancing is \(O(1)\). Total: \(O(mk)\). YES.
Case 2 — Singly-Linked List: Maintain current-node pointer per list; advancing via next is \(O(1)\). Total: \(O(mk)\). YES.
Case 3 — Doubly-Linked List: Same as singly-linked; next pointer gives \(O(1)\) advancement. Total: \(O(mk)\). YES.
Answer: All three representations support an \(O(mk)\) intersection algorithm.
- Dynamic Array: YES
- Singly-Linked List: YES
- Doubly-Linked List: YES
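The cursor-based intersection can be sketched in Python on lists of lists (names are illustrative); each step scans the \(k\) fronts, and every cursor advances at most once per element, giving \(O(mk)\):

```python
def intersect_sorted(lists):
    """Intersect k sorted lists by keeping one cursor per list and
    advancing every cursor that is below the current maximum front."""
    pos = [0] * len(lists)
    out = []
    while all(p < len(l) for p, l in zip(pos, lists)):
        fronts = [l[p] for p, l in zip(pos, lists)]
        hi = max(fronts)
        if min(fronts) == hi:            # all fronts equal: common element
            out.append(hi)
            pos = [p + 1 for p in pos]
        else:                            # advance cursors lagging behind hi
            pos = [p + (1 if l[p] < hi else 0) for p, l in zip(pos, lists)]
    return out
```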
4.31. Asymptotic Comparisons Table (Midterm 2025, Question D)
For each pair \((A, B)\) below, determine whether \(A = O(B)\), \(A = o(B)\), \(A = \Omega(B)\), \(A = \omega(B)\), \(A = \Theta(B)\):
| \(A\) | \(B\) |
|---|---|
| \(n^n\) | \(n!\) |
| \(n^{2025/2024}\) | \((2025/2024)^n\) |
| \(\log^{2025} n\) | \(n^{1/2025}\) |
| \(n \log_2 n\) | \(n^2 / \log_2 n\) |
| \(\log_2(n^n)\) | \(\log_2(n!)\) |
Click to see the solution
Key Concept: Use growth hierarchies: \(\log^k n \ll n^\epsilon \ll n^c \ll c^n \ll n^n\). Apply Stirling’s approximation \(n! \approx (n/e)^n \sqrt{2\pi n}\).
Row 1: \(n^n\) vs \(n!\)
By Stirling: \(n^n / n! \approx n^n / (n/e)^n = e^n \to \infty\). So \(n^n = \omega(n!)\).
\(O\): NO, \(o\): NO, \(\Omega\): YES, \(\omega\): YES, \(\Theta\): NO.
Row 2: \(n^{2025/2024}\) vs \((2025/2024)^n\)
Any polynomial is \(o\) of any exponential: \(n^c = o(r^n)\) for \(r > 1\).
\(O\): YES, \(o\): YES, \(\Omega\): NO, \(\omega\): NO, \(\Theta\): NO.
Row 3: \(\log^{2025} n\) vs \(n^{1/2025}\)
Any power of \(\log n\) is \(o\) of any positive power of \(n\).
\(O\): YES, \(o\): YES, \(\Omega\): NO, \(\omega\): NO, \(\Theta\): NO.
Row 4: \(n \log_2 n\) vs \(n^2 / \log_2 n\)
Ratio: \((n \log n) / (n^2/\log n) = \log^2 n / n \to 0\).
\(O\): YES, \(o\): YES, \(\Omega\): NO, \(\omega\): NO, \(\Theta\): NO.
Row 5: \(\log_2(n^n)\) vs \(\log_2(n!)\)
\(\log_2(n^n) = n\log_2 n\). By Stirling: \(\log_2(n!) = n\log_2 n - n\log_2 e + O(\log n) \sim n\log_2 n\). So they are \(\Theta\) of each other.
\(O\): YES, \(o\): NO, \(\Omega\): YES, \(\omega\): NO, \(\Theta\): YES.
Answer table:
| \(A\) | \(B\) | \(A = O(B)\) | \(A = o(B)\) | \(A = \Omega(B)\) | \(A = \omega(B)\) | \(A = \Theta(B)\) |
|---|---|---|---|---|---|---|
| \(n^n\) | \(n!\) | NO | NO | YES | YES | NO |
| \(n^{2025/2024}\) | \((2025/2024)^n\) | YES | YES | NO | NO | NO |
| \(\log^{2025} n\) | \(n^{1/2025}\) | YES | YES | NO | NO | NO |
| \(n \log_2 n\) | \(n^2/\log_2 n\) | YES | YES | NO | NO | NO |
| \(\log_2(n^n)\) | \(\log_2(n!)\) | YES | NO | YES | NO | YES |
4.32. Master Theorem Application (Midterm 2025, Question E)
Consider the recurrence \(T(n) = 64T(n/4) + 7n^3 + 49\). Which applies?
- \(T(n) = \Theta(n^3 \log n)\)
- \(T(n) = \Theta(n^4)\)
- Master theorem is not applicable
- None of the above
Justify your answer.
Click to see the solution
Parameters: \(a = 64\), \(b = 4\), \(f(n) = 7n^3 + 49 = \Theta(n^3)\).
Compute \(n^{\log_b a}\):
\[n^{\log_4 64} = n^{\log_4 4^3} = n^3\]
Compare: \(f(n) = \Theta(n^3) = \Theta(n^{\log_4 64} \cdot \log^0 n)\) → Case 2 with \(k = 0\).
\[T(n) = \Theta(n^3 \log^{0+1} n) = \Theta(n^3 \log n)\]
Answer: Option 1: \(T(n) = \Theta(n^3 \log n)\). Master Theorem Case 2 applies since \(f(n) = \Theta(n^{\log_4 64}) = \Theta(n^3)\).
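Case 2 can be sanity-checked numerically: the recurrence gives \(T(n)/n^3 = T(n/4)/(n/4)^3 + 7 + 49/n^3\), so \(T(n)/n^3\) should grow by about 7 each time \(n\) quadruples (one extra level contributing the \(n^3\) term), which is exactly \(\Theta(\log n)\) growth of the ratio:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = 64 T(n/4) + 7 n^3 + 49, evaluated on exact powers of 4.
    if n <= 1:
        return 1
    return 64 * T(n // 4) + 7 * n**3 + 49

# T(n)/n^3 for n = 4^3 .. 4^9; successive differences should approach 7.
ratios = [T(4**i) / 4**(3 * i) for i in range(3, 10)]
diffs = [b - a for a, b in zip(ratios, ratios[1:])]
```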
4.33. Hash Table with Linear Probing (Midterm 2025, Question F)
Consider a hash table with 11 slots and \(h(k) = (k + 1) \bmod 11\) with linear probing:
| idx | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Key | — | 23 | — | 35 | 2 | 15 | 5 | 13 | — | — | 9 |
Answer YES/NO:
- Could key 9 have been inserted before 23?
- Could key 23 have been inserted before 2?
- Could key 13 have been inserted before 5?
- Could key 15 have been inserted before 35?
Click to see the solution
Home positions (where \(h(k)\) maps to):
- \(h(9) = 10\) → stored at 10 (home) ✓
- \(h(23) = 24 \bmod 11 = 2\) → stored at 1 (wrapped past 2,3,…,10,0 → all occupied at insertion time)
- \(h(35) = 36 \bmod 11 = 3\) → stored at 3 (home) ✓
- \(h(2) = 3 \bmod 11 = 3\) → stored at 4 (home 3 taken by 35 → probed to 4)
- \(h(15) = 16 \bmod 11 = 5\) → stored at 5 (home) ✓
- \(h(5) = 6 \bmod 11 = 6\) → stored at 6 (home) ✓
- \(h(13) = 14 \bmod 11 = 3\) → stored at 7 (home 3 taken by 35; 4 by 2; 5 by 15; 6 by 5; probed to 7)
Analysis:
- 9 before 23? Key 9 sits at its home slot (10). For 23 (home 2) to end up at slot 1, its probe sequence must pass through an occupied slot 10, so 9 must already have been present when 23 was inserted. Inserting 9 first is not only possible but required. YES.
- 23 before 2? Key 2 needs home 3 to be occupied (by 35). Key 23 lives at slot 1 and has home 2, not 3. Inserting 23 before 2 does not affect whether slot 3 is available for 2; slot 3 must be occupied by 35 regardless. YES.
- 13 before 5? Key 13 is at slot 7, home 3. Linear probe: 3→4→5→6→7. For 13 to land at 7, slots 3,4,5,6 must all be occupied at time of 13’s insertion. Slot 6 is occupied by key 5 (home 6). So key 5 must be in slot 6 when 13 is inserted, meaning 5 was inserted before 13. Conclusion: 13 could not have been inserted before 5. NO.
- 15 before 35? Key 15 has home 5 and sits there. Key 35 has home 3. They don’t occupy each other’s probing paths. YES.
Answer: 1. YES, 2. YES, 3. NO, 4. YES.
4.34. True/False Statements (Midterm 2025, Question G)
For each statement, determine TRUE or FALSE:
- If \(\log(f(n)) = \Theta(\log(g(n)))\) then \(f(n) = \Theta(g(n))\).
- Incrementing a binary counter takes \(\Theta(1)\) amortized, but \(\Theta(\log k)\) worst case (where \(k\) is the number of digits).
- In a stack using dynamic arrays, PUSH has \(\Theta(n)\) worst-case time complexity.
- Inserting into a hash table with separate chaining is \(O(1 + \alpha)\) average case.
- Finding a value in a hash table with load factor \(\alpha\) is \(\Theta(1)\) worst case as long as \(\text{hash}(k)\) runs in \(\Theta(1)\).
- If the master theorem is not applicable, the algorithm can loop indefinitely.
- Dynamic Programming is an implementation in a dynamically-typed language.
- QUICK-SORT has \(\Theta(n \log n)\) worst-case time complexity.
- Sorting \(n\) real numbers uniformly distributed in \([0, k]\) using BUCKET-SORT takes \(\Theta(n)\) average case.
- PARTITION (used in QUICK-SORT) has quadratic worst-case time complexity.
Click to see the solution
1. FALSE. Counterexample: \(f(n) = n\), \(g(n) = n^2\). \(\log f = \Theta(\log g)\) (both are \(\Theta(\log n)\)), but \(f \ne \Theta(g)\).
2. FALSE. With \(k\) digits, the worst case (all 1s) flips all \(k\) bits: \(\Theta(k)\) worst case, not \(\Theta(\log k)\).
3. TRUE. When the array doubles, copying \(n\) elements takes \(\Theta(n)\) — this is the worst case for a single PUSH. The amortized cost is \(\Theta(1)\).
4. TRUE. Average chain length is \(\alpha\); insert requires hashing (\(O(1)\)) + appending (\(O(1)\) or \(O(\alpha)\) if checking duplicates). \(O(1 + \alpha)\) is correct.
5. FALSE. Worst case: all \(n\) keys collide into one chain. Lookup is \(\Theta(n)\) worst case regardless of hash computation speed.
6. FALSE. “Master theorem not applicable” means the recurrence doesn’t fit the required form; the algorithm still terminates. We simply use another analysis method.
7. FALSE. Dynamic Programming is an algorithmic design technique for optimization problems with overlapping subproblems. Language typing is irrelevant.
8. FALSE. QUICK-SORT is \(\Theta(n \log n)\) average case and \(\Theta(n^2)\) worst case (e.g., sorted input with last-element pivot).
9. TRUE. With uniform distribution over \(n\) buckets, expected elements per bucket is \(O(1)\). Each bucket is sorted in \(O(1)\) expected time. Total: \(\Theta(n)\) average case.
10. FALSE. PARTITION runs in \(\Theta(n)\) time — it makes one linear scan. It is QUICK-SORT as a whole that has \(\Theta(n^2)\) worst case.
Answer summary:
| Statement | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Answer | FALSE | FALSE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE |
4.35. Rod Cutting with DP (Midterm 2025, Question H)
Answer the following about the Rod Cutting problem:
- Using DP with tabulation, what is the worst-case time complexity of finding the maximum revenue for a rod of length \(n\)?
- Using DP with memoization (finding both maximum revenue and cut positions), what is the worst-case time complexity?
- Find the maximum revenue for a rod of length 10 with prices:
| Length \(i\) | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Price \(p_i\) | 2 | 3 | 9 | 12 | 13 | 13 | 14 | 24 | 25 | 28 |
- (+1% extra credit) Consider Rod Cutting with at most \(k\) cuts. Give an \(O(nk)\) pseudocode algorithm.
Click to see the solution
Part 1 — Tabulation time complexity:
Fill array \(r[0..n]\); for each \(j\) from 1 to \(n\), try all \(i\) from 1 to \(j\) cuts: \(\sum_{j=1}^{n} j = n(n+1)/2 = \Theta(n^2)\).
Answer: \(\Theta(n^2)\).
Part 2 — Memoization time complexity:
Same subproblems, same recurrence: \(\Theta(n^2)\).
Answer: \(\Theta(n^2)\).
Part 3 — Maximum revenue for \(n = 10\):
Recurrence: \(r[j] = \max_{1 \le i \le j}(p[i] + r[j-i])\), \(r[0] = 0\).
| \(j\) | candidates \(p[i] + r[j-i]\) | \(r[j]\) | best cut |
|---|---|---|---|
| 1 | \(p[1]+r[0] = 2\) | 2 | 1 |
| 2 | \(\max(2+2,\ 3+0) = 4\) | 4 | 1+1 |
| 3 | \(\max(2+4,\ 3+2,\ 9+0) = 9\) | 9 | 3 |
| 4 | \(\max(2+9,\ 3+4,\ 9+2,\ 12+0) = 12\) | 12 | 4 |
| 5 | \(\max(2+12,\ 3+9,\ 9+4,\ 12+2,\ 13+0) = 14\) | 14 | 1+4 or 4+1 |
| 6 | \(\max(2+14,\ 3+12,\ 9+9,\ 12+4,\ 13+2,\ 13+0) = 18\) | 18 | 3+3 |
| 7 | \(\max(2+18,\ 3+14,\ 9+12,\ 12+9,\ 13+4,\ 13+2,\ 14+0) = 21\) | 21 | 3+4 or 4+3 |
| 8 | \(\max(2+21,\ 3+18,\ 9+14,\ 12+12,\ 13+9,\ 13+4,\ 14+2,\ 24+0) = 24\) | 24 | 4+4 or 8 |
| 9 | \(\max(2+24,\ 3+21,\ 9+18,\ 12+14,\ 13+12,\ 13+9,\ 14+4,\ 24+2,\ 25+0) = 27\) | 27 | 3+3+3 |
| 10 | \(\max(2+27,\ 3+24,\ 9+21,\ 12+18,\ 13+14,\ 13+12,\ 14+9,\ 24+4,\ 25+2,\ 28+0) = 30\) | 30 | 3+7 or 4+6 |
Part 4 — At most \(k\) cuts:
Define \(dp[j][c]\) = maximum revenue from a rod of length \(j\) with at most \(c\) cuts allowed.
\[dp[j][c] = \max\left(p[j],\ \max_{1 \le i < j}(p[i] + dp[j-i][c-1])\right)\]
Base cases: \(dp[0][c] = 0\) for all \(c\); \(dp[j][0] = p[j]\) (no cuts allowed → sell the whole piece).
function ROD-CUTTING-K(p, n, k):
dp[0..n][0..k] ← 0
for j ← 1 to n:
dp[j][0] ← p[j] // no cuts: sell as length j
for c ← 1 to k:
for j ← 1 to n:
dp[j][c] ← p[j] // option: no cut
for i ← 1 to j-1:
dp[j][c] ← max(dp[j][c], p[i] + dp[j-i][c-1])
return dp[n][k]
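A Python transcription of the pseudocode above, using the Part 3 prices as a test case (with \(p[0]\) as an unused placeholder):

```python
def rod_cut_k(p, n, k):
    # dp[j][c] = max revenue from a rod of length j using at most c cuts;
    # p[i] is the price of a piece of length i (p[0] unused).
    dp = [[0] * (k + 1) for _ in range(n + 1)]
    for j in range(1, n + 1):
        for c in range(k + 1):
            dp[j][c] = p[j]                  # option: sell the piece whole
            if c > 0:
                for i in range(1, j):        # first piece of length i
                    dp[j][c] = max(dp[j][c], p[i] + dp[j - i][c - 1])
    return dp[n][k]

prices = [0, 2, 3, 9, 12, 13, 13, 14, 24, 25, 28]  # Part 3 price table
```

With no cuts allowed the answer is \(p[10] = 28\); two cuts already reach the unrestricted optimum of 30 (pieces 3+3+4).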
Complexity: Three nested loops give \(O(n^2 k)\), not \(O(nk)\): the inner maximization over the first-cut position \(i\) costs \(O(n)\) per state. Reaching \(O(nk)\) requires a formulation that eliminates this inner loop, i.e. a different choice of subproblem or additional structure in the price function; the straightforward DP above is the natural \(O(n^2 k)\) solution.
Answer: 1. \(\Theta(n^2)\) 2. \(\Theta(n^2)\) 3. Maximum revenue = 30 (e.g., cut into 3+7 or 4+6) 4. DP defined above runs in \(O(n^2 k)\); an \(O(nk)\) solution requires advanced techniques.
4.36. Asymptotic Complexity of Solve (Midterm 2026, Question A)
Compute the asymptotic worst-case time complexity of the SOLVE procedure:
Solve(A, n):
return Helper(A, 1, n)
Helper(A, l, r):
if r - l <= 100
return l
else
k := ⌈(r - l + 1) / 6⌉
a := Helper(A, l, l + 2*k) + Helper(A, l + k, l + 3*k)
b := Helper(A, l + 2*k, l + 4*k) + Helper(A, l + 3*k, l + 5*k)
c := Helper(A, l + 4*k, r)
low := l; high := r
while low < high:
mid := ⌊(low + high) / 2⌋
count := 0
for i from l to r:
if A[i] > A[mid]: count := count + 1
if count > (r - l + 1) / 2 then high := mid else low := mid + 1
return low
- Express the running time \(T(n)\) as a recurrence relation.
- Find the asymptotic complexity using the Master Method, specifying which case applies.
Click to see the solution
Step 1: Set up the recurrence.
Let \(n = r - l + 1\) be the size of the subarray. When \(n \le 101\), the function returns immediately: \(T(n) = \Theta(1)\).
For larger \(n\), set \(k = \lceil n/6 \rceil \approx n/6\).
Count the recursive calls:
Helper(A, l, l + 2*k) — subarray of size \(2k + 1 \approx n/3\)
Helper(A, l + k, l + 3*k) — subarray of size \(2k + 1 \approx n/3\)
Helper(A, l + 2*k, l + 4*k) — subarray of size \(2k + 1 \approx n/3\)
Helper(A, l + 3*k, l + 5*k) — subarray of size \(2k + 1 \approx n/3\)
Helper(A, l + 4*k, r) — subarray of size \(\approx n - 4k \approx n/3\)
So there are 5 recursive calls, each on a subarray of size \(\approx n/3\).
The non-recursive work: the while loop runs at most \(O(\log n)\) iterations (binary search on a range of size \(n\)). Each iteration runs the inner for loop over all \(n\) elements: \(\Theta(n)\) per iteration, so total non-recursive work is \(\Theta(n \log n)\).
Recurrence:
\[T(n) = 5T(n/3) + \Theta(n \log n)\]
Step 2: Apply the Master Theorem.
The Master Theorem for \(T(n) = aT(n/b) + f(n)\):
- \(a = 5\), \(b = 3\), so \(n^{\log_b a} = n^{\log_3 5}\).
- \(\log_3 5 \approx 1.465\).
- \(f(n) = \Theta(n \log n) = \Theta(n^1 \log n)\).
Compare \(f(n)\) with \(n^{\log_3 5} \approx n^{1.465}\):
Since a polynomial gap absorbs any log factor, \(n \log n = o(n^{1.465})\); hence \(f(n) = O(n^{\log_3 5 - \varepsilon})\) for any \(0 < \varepsilon < \log_3 5 - 1 \approx 0.465\), e.g. \(\varepsilon = 0.4\).
Case 1 of the Master Theorem applies.
\[\boxed{T(n) = \Theta(n^{\log_3 5})}\]
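As a sanity check (not required on the exam), the recurrence can be evaluated numerically: since \(T(n) \sim c\, n^{\log_3 5}\), tripling \(n\) should multiply \(T(n)\) by roughly \(3^{\log_3 5} = 5\). A small Python sketch:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """Model T(n) = 5 T(n/3) + n log n, with T(n) = 1 for n <= 101."""
    if n <= 101:
        return 1.0
    return 5 * T(n // 3) + n * math.log(n)

# T(3n)/T(n) should approach 5 from above as n grows
for n in (3**8, 3**10, 3**12):
    print(n, T(3 * n) / T(n))
```

The ratio decreases toward 5, consistent with the leading term \(n^{\log_3 5}\) dominating the \(\Theta(n \log n)\) driving function.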
4.37. Asymptotic Comparison Table (Midterm 2026, Question B)
Use the most precise asymptotic notation (\(O\), \(\Theta\), \(\Omega\)) or \(\times\) if none applies, to express the relationship \(f(n) = [?](g(n))\):
| # | \(f(n)\) | \(g(n)\) |
|---|---|---|
| 1 | \(2026^n\) | \((n+2026)!\) |
| 2 | \((2026/2025)^n\) | \(n^{2026/2025}\) |
| 3 | \((\log_2 n)^{2026}\) | \(\sqrt[2026]{n}\) |
| 4 | \(n \log_2 n\) | \(n^2 / \log_2 n\) |
| 5 | \(\log_2(n^n)\) | \(\log_2(n!)\) |
| 6 | \(\sqrt{n \log_2 n}\) | \(\sqrt{n} \cdot \log_2 \sqrt{n}\) |
| 7 | \((1 + 1/n)^n\) | \(n\) |
| 8 | \(2^n + 2^n\) | \(4^n\) |
Click to see the solution
1. \(2026^n\) vs \((n+2026)!\)
\((n+2026)!\) grows much faster than any exponential \(c^n\). For large \(n\), \((n+2026)! \gg 2026^n\).
Answer: \(f(n) = O(g(n))\)
2. \((2026/2025)^n\) vs \(n^{2026/2025}\)
\((2026/2025)^n\) is exponential (base \(> 1\)), while \(n^{2026/2025}\) is polynomial. Exponential grows faster.
Answer: \(f(n) = \Omega(g(n))\)
3. \((\log_2 n)^{2026}\) vs \(\sqrt[2026]{n} = n^{1/2026}\)
Any positive power of \(n\) grows faster than any power of \(\log n\): \((\log n)^k = o(n^\varepsilon)\) for any \(\varepsilon > 0\).
Here \(\varepsilon = 1/2026 > 0\), so \((\log_2 n)^{2026} = o(n^{1/2026})\).
Answer: \(f(n) = O(g(n))\)
4. \(n \log_2 n\) vs \(n^2 / \log_2 n\)
\(\frac{g(n)}{f(n)} = \frac{n^2/\log n}{n \log n} = \frac{n}{\log^2 n} \to \infty\).
So \(f(n) = o(g(n))\).
Answer: \(f(n) = O(g(n))\)
5. \(\log_2(n^n)\) vs \(\log_2(n!)\)
\(\log_2(n^n) = n \log_2 n\).
By Stirling’s approximation: \(\log_2(n!) = \Theta(n \log n)\).
More precisely, \(n \log n - n/\ln 2 \le \log_2(n!) \le n \log n\), so both are \(\Theta(n \log n)\).
Answer: \(f(n) = \Theta(g(n))\)
6. \(\sqrt{n \log_2 n}\) vs \(\sqrt{n} \cdot \log_2 \sqrt{n}\)
\(\sqrt{n \log_2 n} = \sqrt{n} \cdot \sqrt{\log_2 n}\).
\(\sqrt{n} \cdot \log_2 \sqrt{n} = \sqrt{n} \cdot \frac{1}{2}\log_2 n\).
Ratio: \(\frac{\sqrt{n} \cdot \sqrt{\log n}}{\sqrt{n} \cdot \frac{1}{2}\log n} = \frac{\sqrt{\log n}}{\frac{1}{2}\log n} = \frac{2}{\sqrt{\log n}} \to 0\).
So \(f(n) = o(g(n))\).
Answer: \(f(n) = O(g(n))\)
7. \((1 + 1/n)^n\) vs \(n\)
\((1 + 1/n)^n \to e \approx 2.718\) as \(n \to \infty\) (converges to a constant). Meanwhile \(g(n) = n \to \infty\).
Answer: \(f(n) = O(g(n))\)
8. \(2^n + 2^n\) vs \(4^n\)
\(2^n + 2^n = 2 \cdot 2^n = 2^{n+1}\).
\(4^n = (2^2)^n = 2^{2n}\).
\(\frac{2^{2n}}{2^{n+1}} = 2^{n-1} \to \infty\), so \(f(n) = o(g(n))\).
Answer: \(f(n) = O(g(n))\)
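Each verdict above can be spot-checked numerically by evaluating the ratio \(f(n)/g(n)\) at growing \(n\) (a heuristic check, not a proof); a Python sketch for rows 4, 6, and 7:

```python
import math

def ratio(f, g, n):
    """f(n)/g(n); should tend to 0 when f = o(g)."""
    return f(n) / g(n)

# Row 4: (n log n) / (n^2 / log n) = (log n)^2 / n -> 0
f4 = lambda n: n * math.log2(n)
g4 = lambda n: n * n / math.log2(n)

# Row 6: sqrt(n log n) / (sqrt(n) * log2 sqrt(n)) = 2 / sqrt(log2 n) -> 0
f6 = lambda n: math.sqrt(n * math.log2(n))
g6 = lambda n: math.sqrt(n) * math.log2(math.sqrt(n))

# Row 7: (1 + 1/n)^n -> e (a constant), while g(n) = n -> infinity
f7 = lambda n: (1 + 1 / n) ** n

for k in (2, 6, 12):
    n = 10 ** k
    print(n, ratio(f4, g4, n), ratio(f6, g6, n), f7(n) / n)
```

The ratios shrink toward 0 in all three cases, matching the \(f(n) = O(g(n))\) (in fact \(o\)) answers.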
4.38. Master Theorem Application (Midterm 2026, Question C)
Consider \(T(n) = 8T(n/3) + 5n^2 + 10\). Which of the following is correct?
- \(T(n) = \Theta(n^2 \log n)\)
- \(T(n) = \Theta(n^3)\)
- Master theorem not applicable
- None of the above
Answer and justify.
Click to see the solution
Answer: Option 4 — None of the above.
Apply the Master Theorem with \(a = 8\), \(b = 3\), \(f(n) = 5n^2 + 10 = \Theta(n^2)\).
Compute \(n^{\log_b a} = n^{\log_3 8}\).
\(\log_3 8 = \frac{\ln 8}{\ln 3} = \frac{3\ln 2}{\ln 3} \approx \frac{3 \times 0.693}{1.099} \approx 1.893\).
So \(n^{\log_3 8} \approx n^{1.893}\).
Compare \(f(n) = \Theta(n^2)\) with \(n^{1.893}\):
Since \(n^2 = \Omega(n^{1.893 + \varepsilon})\) for \(\varepsilon \approx 0.107 > 0\), Case 3 of the Master Theorem applies — provided the regularity condition holds.
Regularity condition: \(af(n/b) \le cf(n)\) for some \(c < 1\).
\(8 \cdot f(n/3) = 8 \cdot 5(n/3)^2 = 8 \cdot \frac{5n^2}{9} = \frac{40n^2}{9} \approx 4.44 \cdot n^2/1 \le c \cdot 5n^2\) requires \(c \ge 8/9 < 1\). ✓
So Case 3 applies and:
\[T(n) = \Theta(f(n)) = \Theta(n^2)\]
None of the listed options (1), (2), (3) is correct — the correct answer is \(T(n) = \Theta(n^2)\), which is option (4) None of the above.
4.39. Data Structure Complexity Table (Midterm 2026, Question D)
Collection \(A\) of \(n\) columns, each column \(C_i\) has \(n\) elements. Extend \(A\) with column \(C_{n+1}\) consisting of the diagonal elements \(C_1[1], C_2[2], \ldots, C_n[n]\).
Fill the table of asymptotic worst-case time complexity for creating \(C_{n+1}\) and appending it to \(A\):
| Representation of \(A\) | Representation of \(C_i\) | Complexity |
|---|---|---|
| Array List | Array List | ? |
| Array List | Singly-Linked (no tail) | ? |
| Array List | Singly-Linked (with tail) | ? |
| Singly-Linked (no tail) | Array List | ? |
| Singly-Linked (no tail) | Singly-Linked (no tail) | ? |
| Singly-Linked (no tail) | Singly-Linked (with tail) | ? |
| Singly-Linked (with tail) | Array List | ? |
| Singly-Linked (with tail) | Singly-Linked (no tail) | ? |
| Singly-Linked (with tail) | Singly-Linked (with tail) | ? |
Click to see the solution
Key operations:
- Access \(C_i\) (the \(i\)-th column of \(A\)): \(O(1)\) for Array List, \(O(i)\) for Singly-Linked (no tail) — must traverse to position \(i\).
- Access \(C_i[i]\) (the \(i\)-th element of \(C_i\)): \(O(1)\) for Array List, \(O(i)\) for Singly-Linked (any) — must traverse to position \(i\).
- Append \(C_{n+1}\) to \(A\): \(O(1)\) amortized for Array List (or \(O(n)\) worst case resize), \(O(n)\) for Singly-Linked (no tail) — must traverse to end, \(O(1)\) for Singly-Linked (with tail).
- Building \(C_{n+1}\): We retrieve elements \(C_1[1], C_2[2], \ldots, C_n[n]\). For each \(i\), we access column \(C_i\) then element \([i]\).
Cost of collecting all \(n\) diagonal elements:
- If \(A\) is an Array List: accessing \(C_i\) is \(O(1)\), so cost of accessing all \(C_i\) is \(O(n)\) total.
- If \(A\) is a Singly-Linked (no tail): accessing \(C_i\) costs \(O(i)\), total \(\sum_{i=1}^n O(i) = O(n^2)\).
- If \(A\) is a Singly-Linked (with tail): same as no-tail for random access — \(O(i)\) per access, total \(O(n^2)\).
Within each \(C_i\), accessing element at position \(i\):
- Array List: \(O(1)\) per element, \(O(n)\) total over all \(i\).
- Singly-Linked (no tail or with tail): \(O(i)\) per element, \(\sum_{i=1}^n O(i) = O(n^2)\) total.
Appending \(C_{n+1}\) to \(A\) (building \(C_{n+1}\) itself as a list of \(n\) elements):
- The new column itself is built by inserting its \(n\) elements; this costs \(O(n)\) in every representation (for a linked list without a tail pointer, keep a reference to the last inserted node while building, or prepend in reverse order).
- Appending the column pointer to \(A\): \(O(1)\) Array List (amortized), \(O(n)\) Singly-Linked no tail, \(O(1)\) Singly-Linked with tail.
Combined complexities (dominant term):
| Representation of \(A\) | Representation of \(C_i\) | Complexity |
|---|---|---|
| Array List | Array List | \(O(n)\) |
| Array List | Singly-Linked (no tail) | \(O(n^2)\) |
| Array List | Singly-Linked (with tail) | \(O(n^2)\) |
| Singly-Linked (no tail) | Array List | \(O(n^2)\) (accessing \(A\)) + \(O(n)\) append = \(O(n^2)\) |
| Singly-Linked (no tail) | Singly-Linked (no tail) | \(O(n^2)\) |
| Singly-Linked (no tail) | Singly-Linked (with tail) | \(O(n^2)\) |
| Singly-Linked (with tail) | Array List | \(O(n^2)\) (accessing \(A\)) + \(O(1)\) append = \(O(n^2)\) |
| Singly-Linked (with tail) | Singly-Linked (no tail) | \(O(n^2)\) |
| Singly-Linked (with tail) | Singly-Linked (with tail) | \(O(n^2)\) |
Answer: Only when both \(A\) and \(C_i\) are Array Lists is the total \(O(n)\); all other combinations are \(O(n^2)\).
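The quadratic cost of the linked-list cases can be observed directly by counting pointer hops; a small self-contained Python sketch (the `Node`, `from_list`, and `get` helpers are ours, not part of the exam):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def from_list(items):
    """Build a singly linked list (no tail pointer); return its head."""
    head = None
    for x in reversed(items):
        node = Node(x)
        node.next = head
        head = node
    return head

def get(head, i, counter):
    """Return the i-th element (1-indexed), counting pointer hops."""
    node = head
    for _ in range(i - 1):
        node = node.next
        counter[0] += 1
    return node.value

n = 50
# A: linked list of n columns; each column is a linked list of n elements
A = from_list([from_list([(i, j) for j in range(1, n + 1)])
               for i in range(1, n + 1)])
hops = [0]
diag = [get(get(A, i, hops), i, hops) for i in range(1, n + 1)]
# accessing C_i costs i-1 hops, and C_i[i] another i-1 hops,
# so total = 2 * sum_{i=1}^{n} (i-1) = n(n-1), i.e. Theta(n^2)
print(hops[0])
```

For \(n = 50\) the hop counter lands exactly on \(n(n-1)\), matching the \(\Theta(n^2)\) entries in the table.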
4.40. Radix Sort for Phrases (Midterm 2026, Question E)
\(p\) phrases are sorted using RADIX-SORT, treating each word as a “digit”:
- Each phrase has at most \(w\) words
- Each word has at most \(s\) symbols
- Each symbol is from an alphabet of size \(a\)
- Comparing two symbols takes \(\Theta(1)\)
Find the worst-case time complexity for sorting \(p\) phrases when the stable sort for words uses:
- INSERTION-SORT
- COUNTING-SORT
- QUICK-SORT
Click to see the solution
Setup:
RADIX-SORT processes \(w\) digit positions (one per word position in a phrase). At each position, we stably sort \(p\) phrases by the word at that position.
Comparing two words of at most \(s\) symbols costs \(O(s)\) (compare symbol by symbol, each \(\Theta(1)\)).
1. INSERTION-SORT for sorting words:
- Comparing two words: \(O(s)\).
- INSERTION-SORT on \(p\) items: \(O(p^2)\) comparisons worst case.
- Cost per RADIX-SORT pass: \(O(p^2 \cdot s)\).
- Total for \(w\) passes: \(O(w \cdot p^2 \cdot s)\).
\[\boxed{O(w \cdot s \cdot p^2)}\]
2. COUNTING-SORT for sorting words:
COUNTING-SORT needs integer keys from a bounded range. Interpreting each word of up to \(s\) symbols over an alphabet of size \(a\) as an integer key, there are \(a^s\) possible words, so the counting array has size \(a^s\) and a single pass costs \(\Theta(p + a^s)\).
Total for \(w\) passes: \(O(w \cdot (p + a^s))\).
\[\boxed{O(w \cdot (p + a^s))}\]
3. QUICK-SORT for sorting words:
QUICK-SORT is not stable, so using it directly inside RADIX-SORT breaks the algorithm’s correctness in general. Analyzing the cost anyway (or assuming a stabilized variant, e.g., breaking ties by original index):
- QUICK-SORT on \(p\) items: \(O(p^2)\) worst case.
- Each comparison: \(O(s)\).
- Cost per pass: \(O(p^2 \cdot s)\).
- Total: \(O(w \cdot s \cdot p^2)\).
Note: Since QUICK-SORT is not stable, RADIX-SORT with QUICK-SORT does not correctly sort in general. This is a fundamental issue.
\[\boxed{O(w \cdot s \cdot p^2)}\]
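For contrast with the cost analysis, the phrase-level RADIX-SORT itself fits in a few lines of Python, using the built-in `sorted` (which is guaranteed stable) as the per-position sort; missing words are padded with `""`, which compares before every real word:

```python
def radix_sort_phrases(phrases, w):
    """Radix sort phrases (tuples of words), one word position per digit.
    Relies on sorted() being stable; processes positions from least
    significant (last word) to most significant (first word)."""
    for pos in range(w - 1, -1, -1):
        # "" pads short phrases and sorts before any non-empty word
        phrases = sorted(phrases,
                         key=lambda ph: ph[pos] if pos < len(ph) else "")
    return phrases
```

Replacing `sorted` with an unstable sort (e.g., a textbook QUICK-SORT) breaks this, which is exactly the point of part 3.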
4.41. Counting Sort Application (Midterm 2026, Question F)
Apply COUNTING-SORT to the following array:
| pos | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| data | I | E | D | R | S | A | E | I | H | N | S | N |
- Give the auxiliary array \(C\) after the cumulative (prefix-sum) step for \(k \in \{0,1,2,3,4\}\).
- Give the output array (sorted).
Click to see the solution
Step 1: Count occurrences.
Scan the key array: keys are \(3,1,2,0,2,1,4,0,2,1,0,3\), giving counts \(\text{count}[0..4] = [3, 3, 3, 2, 1]\).
Step 2: Cumulative sum (prefix sum).
\(C[k] = \sum_{j=0}^{k} \text{count}[j]\)
Answer to part 1: \(C = [3, 6, 9, 11, 12]\)
Step 3: Build output array (process input right to left for stability).
Output array of size 12, 1-indexed positions.
Input (1-indexed):
| pos | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| key | 3 | 1 | 2 | 0 | 2 | 1 | 4 | 0 | 2 | 1 | 0 | 3 |
| dat | I | E | D | R | S | A | E | I | H | N | S | N |
\(C\) after cumulative step: \(C[0]=3, C[1]=6, C[2]=9, C[3]=11, C[4]=12\).
Process right to left (positions 12 down to 1):
- pos=12: key=3, data=N → output[C[3]]=output[11]=N, C[3]=10
- pos=11: key=0, data=S → output[C[0]]=output[3]=S, C[0]=2
- pos=10: key=1, data=N → output[C[1]]=output[6]=N, C[1]=5
- pos=9: key=2, data=H → output[C[2]]=output[9]=H, C[2]=8
- pos=8: key=0, data=I → output[C[0]]=output[2]=I, C[0]=1
- pos=7: key=4, data=E → output[C[4]]=output[12]=E, C[4]=11
- pos=6: key=1, data=A → output[C[1]]=output[5]=A, C[1]=4
- pos=5: key=2, data=S → output[C[2]]=output[8]=S, C[2]=7
- pos=4: key=0, data=R → output[C[0]]=output[1]=R, C[0]=0
- pos=3: key=2, data=D → output[C[2]]=output[7]=D, C[2]=6
- pos=2: key=1, data=E → output[C[1]]=output[4]=E, C[1]=3
- pos=1: key=3, data=I → output[C[3]]=output[10]=I, C[3]=9
Output array:
| pos | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| key | 0 | 0 | 0 | 1 | 1 | 1 | 2 | 2 | 2 | 3 | 3 | 4 |
| dat | R | I | S | E | A | N | D | S | H | I | N | E |
Answer to part 2: The sorted array is: \(R, I, S, E, A, N, D, S, H, I, N, E\) (keys: \(0,0,0,1,1,1,2,2,2,3,3,4\)).
Note: Reading the data in sorted order spells out RISE AND SHINE — a cute exam easter egg!
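The whole run can be reproduced mechanically; a Python sketch of stable COUNTING-SORT on (key, data) pairs (this is the 0-indexed variant, which decrements \(C[\text{key}]\) before placing, but it produces the same output as the 1-indexed trace above):

```python
def counting_sort(keys, data, k):
    """Stable counting sort of data by keys in range 0..k."""
    count = [0] * (k + 1)
    for key in keys:
        count[key] += 1                      # step 1: occurrence counts
    C = list(count)
    for j in range(1, k + 1):
        C[j] += C[j - 1]                     # step 2: prefix sums
    prefix = list(C)                         # snapshot after cumulative step
    out = [None] * len(keys)
    for i in range(len(keys) - 1, -1, -1):   # step 3: right to left (stable)
        C[keys[i]] -= 1
        out[C[keys[i]]] = data[i]
    return prefix, out

keys = [3, 1, 2, 0, 2, 1, 4, 0, 2, 1, 0, 3]
data = list("IEDRSAEIHNSN")
prefix, out = counting_sort(keys, data, 4)
```

Here `prefix` reproduces the answer to part 1 and `out` spells the sorted phrase from part 2.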
4.42. True or False (Midterm 2026, Question G)
Determine TRUE or FALSE for each statement:
- If \(\log(f(n)) = \Theta(\log(g(n)))\) then \(f(n) = \Theta(g(n))\)
- Incrementing a binary counter takes \(\Theta(\log k)\) worst-case time where \(k\) is the number of digits
- In ARRAYSTACK, PUSH(x) has \(\Theta(n)\) worst-case time complexity
- MERGE-SORT has \(\Theta(n \log n)\) worst-case time complexity
- COUNTING-SORT sorts \(n\) integers in \([0, n^2]\) in \(\Theta(n)\) worst-case time
- If the master theorem doesn’t apply, the algorithm may loop indefinitely
- Dynamic Programming involves a dynamically-typed programming language (e.g., Python)
- QUICK-SORT has \(\Theta(n \log n)\) worst-case time complexity
- In a BST, the node with minimum key cannot have a right child
- Height of a red-black tree with \(n\) internal nodes is \(\Theta(\log n)\)
Click to see the solution
1. FALSE.
Counterexample: \(f(n) = n^2\), \(g(n) = n^3\). Then \(\log f(n) = 2\log n\), \(\log g(n) = 3\log n\), so \(\log f(n) = \Theta(\log g(n))\). But \(f(n) = O(g(n))\), not \(\Theta(g(n))\).
2. FALSE.
In the worst case, incrementing a \(k\)-bit binary counter flips all \(k\) bits (e.g., \(0111\ldots1 \to 1000\ldots0\)), which takes \(\Theta(k)\) time, not \(\Theta(\log k)\). (The amortized cost over a sequence of increments is \(\Theta(1)\), but that is not what the statement claims.)
3. TRUE.
ARRAYSTACK PUSH(x) appends to the end of a backing array. When the array is full, PUSH triggers a resize that copies all \(n\) elements, so a single PUSH costs \(\Theta(n)\) in the worst case, even though PUSH is \(O(1)\) amortized over a sequence of operations.
4. TRUE.
MERGE-SORT always divides the array in half and merges, giving \(T(n) = 2T(n/2) + \Theta(n)\), which solves to \(\Theta(n \log n)\) in all cases (best, average, worst).
5. FALSE.
COUNTING-SORT on \(n\) integers in \([0, n^2]\) requires an auxiliary array of size \(n^2 + 1\), so it takes \(\Theta(n + n^2) = \Theta(n^2)\) time, not \(\Theta(n)\).
6. FALSE.
If the master theorem doesn’t apply, it simply means the theorem cannot be used — the algorithm still terminates normally. We just need another method (e.g., substitution, recursion tree) to find the complexity.
7. FALSE.
Dynamic Programming (DP) is an algorithmic technique for solving problems with overlapping subproblems and optimal substructure. It has nothing to do with dynamically-typed programming languages.
8. FALSE.
QUICK-SORT has worst-case \(\Theta(n^2)\) time (when pivots are always the smallest or largest element). Its average-case is \(\Theta(n \log n)\).
9. FALSE.
The minimum node in a BST has no left child (no smaller key exists), but it may have a right child. Example: root 5 with left child 2, where 2 has a right child 4 — node 2 is the minimum and has a right child.
10. TRUE.
A red-black tree with \(n\) internal nodes has height \(h \le 2\log_2(n+1)\), so \(h = O(\log n)\). Also \(h = \Omega(\log n)\) since it’s a binary tree. Therefore height is \(\Theta(\log n)\).
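Statement 2 is worth a quick experiment: a single increment can flip all \(k\) bits, yet the total over many increments stays below 2 flips per increment on average (the amortized \(\Theta(1)\) bound). A Python sketch:

```python
def increment(bits):
    """Increment a binary counter (list of bits, LSB first); return #flips."""
    flips = 0
    i = 0
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0               # carry: flip a trailing 1 to 0
        flips += 1
        i += 1
    if i < len(bits):
        bits[i] = 1               # final flip: 0 -> 1
        flips += 1
    return flips

k = 10
bits = [0] * k
costs = [increment(bits) for _ in range(2 ** k - 1)]
print(max(costs), sum(costs))
```

For \(k = 10\), the costliest single increment (from \(0111111111\) to \(1000000000\)) flips all 10 bits, while the grand total stays below \(2N\) for \(N\) increments.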
4.43. BST Array Representation (Midterm 2026, Question H)
A BST is stored in an array representation (indices 0–15). The initial tree \(T\):
| key |
8 |
4 |
12 |
2 |
6 |
10 |
– |
– |
– |
– |
– |
9 |
11 |
– |
– |
– |
The array uses level-order (BFS) indexing: root at index 0, left child of \(i\) at \(2i+1\), right child at \(2i+2\).
- Array after inserting 20, 5, 17 (in order) into initial tree \(T\)
- Array after deleting key 10 from initial tree \(T\)
- Array after deleting key 8 from initial tree \(T\)
Click to see the solution
Tree structure from initial array:
Index 0: 8 (root)
Index 1: 4 (left child of 8)
Index 2: 12 (right child of 8)
Index 3: 2 (left child of 4)
Index 4: 6 (right child of 4)
Index 5: 10 (left child of 12)
Index 6: – (right child of 12, empty)
Index 11: 9 (left child of 10, at 2*5+1=11)
Index 12: 11 (right child of 10, at 2*5+2=12)
Part 1: Insert 20, 5, 17
Insert 20: 20 > 8 → right (12) → 20 > 12 → right of 12 (index 6). Place 20 at index 6.
Insert 5: Path: 8→4→6→left. Left child of 6 (index 4) is index \(2 \times 4 + 1 = 9\). Place 5 at index 9.
Insert 17: Path: 8→12→20→left. Left child of 20 (index 6) is index \(2 \times 6 + 1 = 13\). Place 17 at index 13.
| index | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| key | 8 | 4 | 12 | 2 | 6 | 10 | 20 | – | – | 5 | – | 9 | 11 | 17 | – | – |
Part 2: Delete key 10 from initial \(T\)
Node 10 is at index 5. It has two children: 9 (index 11) and 11 (index 12).
Standard BST deletion with two children: replace with in-order successor (smallest in right subtree). In-order successor of 10 is 11 (at index 12, no left child).
Replace key at index 5 with 11. Remove 11 from index 12.
| index | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| key | 8 | 4 | 12 | 2 | 6 | 11 | – | – | – | – | – | 9 | – | – | – | – |
Part 3: Delete key 8 from initial \(T\)
Node 8 is at index 0 (root). It has two children: 4 (left, index 1) and 12 (right, index 2).
In-order successor of 8 is 9 (leftmost node in the right subtree: 12→10→9). Node 9 sits at index 11 and has no children.
Replace root key with 9. Remove 9 from index 11.
| index | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| key | 9 | 4 | 12 | 2 | 6 | 10 | – | – | – | – | – | – | 11 | – | – | – |
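All three answers can be checked mechanically with a few lines of Python (a sketch, not exam material; `delete_two_children` is our simplified helper for the case where the in-order successor is a leaf, which holds for both deletions here):

```python
def insert(a, key):
    """Insert key into the array-encoded BST (root at 0, children 2i+1, 2i+2)."""
    i = 0
    while a[i] is not None:
        i = 2 * i + 1 if key < a[i] else 2 * i + 2
    a[i] = key

def delete_two_children(a, key):
    """Delete a key whose node has two children, assuming its in-order
    successor is a leaf (true for both deletions in this exercise)."""
    i = 0
    while a[i] != key:
        i = 2 * i + 1 if key < a[i] else 2 * i + 2
    s = 2 * i + 2                     # step into the right subtree...
    while 2 * s + 1 < len(a) and a[2 * s + 1] is not None:
        s = 2 * s + 1                 # ...then go as far left as possible
    a[i] = a[s]                       # copy successor key up
    a[s] = None                       # remove the (leaf) successor

E = None                              # empty slot
T = [8, 4, 12, 2, 6, 10, E, E, E, E, E, 9, 11, E, E, E]
```

Running the three scenarios on copies of `T` reproduces the arrays given above.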
4.44. Rod Cutting Problem (Midterm 2026, Question I)
The Rod Cutting problem: given a rod of length \(n\) and a table of prices \(p_i\) for lengths \(i = 1, \ldots, n\), find the maximum revenue \(r_n\) obtainable by cutting the rod.
- Worst-case time complexity of:
- DP with tabulation (bottom-up)
- DP with memoization (top-down)
- Find the maximum revenue for a rod of length 10 with prices:
| length \(i\) | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| price \(p_i\) | 1 | 5 | 5 | 8 | 10 | 14 | 20 | 18 | 22 | 25 |
Present DP table values \(r[0], r[1], \ldots, r[10]\).
Click to see the solution
Part 1: Time complexities
The recurrence is: \[r[n] = \max_{1 \le i \le n} (p_i + r[n-i])\]
- DP with tabulation (bottom-up): For each subproblem of size \(j\) from 1 to \(n\), we try all \(j\) cuts. Total work: \(\sum_{j=1}^{n} j = \frac{n(n+1)}{2} = \Theta(n^2)\).
\[\boxed{\Theta(n^2)}\]
- DP with memoization (top-down): Each subproblem is solved at most once. Same analysis applies: \(\Theta(n^2)\) in worst case.
\[\boxed{\Theta(n^2)}\]
Part 2: DP Table
\(r[0] = 0\) (empty rod, no revenue).
\[r[j] = \max_{1 \le i \le j}(p_i + r[j - i])\]
- \(r[0] = 0\)
- \(r[1] = p_1 + r[0] = 1 + 0 = 1\)
- \(r[2] = \max(p_1+r[1], p_2+r[0]) = \max(1+1, 5+0) = \max(2, 5) = 5\)
- \(r[3] = \max(p_1+r[2], p_2+r[1], p_3+r[0]) = \max(1+5, 5+1, 5+0) = \max(6,6,5) = 6\)
- \(r[4] = \max(p_1+r[3], p_2+r[2], p_3+r[1], p_4+r[0]) = \max(1+6, 5+5, 5+1, 8+0) = \max(7,10,6,8) = 10\)
- \(r[5] = \max(p_1+r[4], p_2+r[3], p_3+r[2], p_4+r[1], p_5+r[0])\) \(= \max(1+10, 5+6, 5+5, 8+1, 10+0) = \max(11,11,10,9,10) = 11\)
- \(r[6] = \max(p_1+r[5], p_2+r[4], p_3+r[3], p_4+r[2], p_5+r[1], p_6+r[0])\) \(= \max(1+11, 5+10, 5+6, 8+5, 10+1, 14+0) = \max(12,15,11,13,11,14) = 15\)
- \(r[7] = \max(p_1+r[6], p_2+r[5], p_3+r[4], p_4+r[3], p_5+r[2], p_6+r[1], p_7+r[0])\) \(= \max(1+15, 5+11, 5+10, 8+6, 10+5, 14+1, 20+0) = \max(16,16,15,14,15,15,20) = 20\)
- \(r[8] = \max(p_i + r[8-i])\) for \(i=1..8\): \(= \max(1+20, 5+15, 5+11, 8+10, 10+6, 14+5, 20+1, 18+0) = \max(21,20,16,18,16,19,21,18) = 21\)
- \(r[9] = \max(p_i + r[9-i])\) for \(i=1..9\): \(= \max(1+21, 5+20, 5+15, 8+11, 10+10, 14+6, 20+5, 18+1, 22+0)\) \(= \max(22,25,20,19,20,20,25,19,22) = 25\)
- \(r[10] = \max(p_i + r[10-i])\) for \(i=1..10\): \(= \max(1+25, 5+21, 5+20, 8+15, 10+11, 14+10, 20+6, 18+5, 22+1, 25+0)\) \(= \max(26,26,25,23,21,24,26,23,23,25) = 26\)
DP Table:
| \(j\) | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| \(r[j]\) | 0 | 1 | 5 | 6 | 10 | 11 | 15 | 20 | 21 | 25 | 26 |
Answer: Maximum revenue for rod of length 10 is \(r[10] = \mathbf{26}\).
One optimal cut: \(10 = 1 + 2 + 7\). Trace back through the table: \(r[10] = p_1 + r[9] = 1 + 25\), \(r[9] = p_2 + r[7] = 5 + 20\), and \(r[7] = p_7 = 20\) (no further cut), giving total revenue \(1 + 5 + 20 = 26\). ✓
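The table and the optimal cut can be regenerated with the standard bottom-up routine; a minimal Python sketch that also records the first-piece choice for traceback (`p` is 1-indexed via a leading 0):

```python
def rod_cutting(p, n):
    """Bottom-up rod cutting: returns (r, cut), where r[j] is the max
    revenue for length j and cut[j] is the length of the first piece."""
    r = [0] * (n + 1)
    cut = [0] * (n + 1)
    for j in range(1, n + 1):
        best, first = -1, 0
        for i in range(1, j + 1):          # first piece of length i
            if p[i] + r[j - i] > best:
                best, first = p[i] + r[j - i], i
        r[j], cut[j] = best, first
    return r, cut

p = [0, 1, 5, 5, 8, 10, 14, 20, 18, 22, 25]
r, cut = rod_cutting(p, 10)
```

Following `cut` from \(j = 10\) down recovers the pieces \(1, 2, 7\) found in the traceback above.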